Meta Strengthens Teen AI Safeguards with New Parental Oversight Tools
Meta Platforms Inc. has announced a new set of AI safety features designed to give parents greater visibility and support as teens interact with its artificial intelligence tools across Facebook, Instagram, and Messenger.
The update introduces an “Insights” tab within parental supervision tools, allowing parents to see the general topics their teens have been discussing with Meta AI over the past seven days. These topics may include areas such as school, entertainment, lifestyle, travel, writing, health, and wellbeing. Parents will also be able to explore sub-categories within these topics to better understand the nature of the conversations.
The feature is now available for parents supervising Teen Accounts in the United States, United Kingdom, Australia, Canada, and Brazil, with a global rollout planned in the coming weeks. According to Meta, the goal is to help families better understand how teens are engaging with AI while maintaining appropriate safeguards around privacy and safety.
In addition to visibility tools, Meta is also introducing conversation starters developed in collaboration with the Cyberbullying Research Center. These prompts are intended to help parents initiate open, non-judgmental discussions with their teens about AI usage and online experiences, offering guidance on how to approach sensitive or unfamiliar topics.
The company noted that these updates build on existing parental supervision features, which already allow guardians to set screen time limits, schedule breaks, and view recent interactions. Meta also said the number of teens enrolled in supervision in the United States has more than doubled over the past year, reflecting growing adoption of its family safety tools.
For more sensitive issues such as suicide and self-harm, Meta said it is developing enhanced alert systems that will notify parents if teens attempt to engage with such topics through its AI assistant. The company emphasised that even when its AI declines to respond to certain questions because of safety restrictions, parents will still be able to see the topics their teen attempted to explore.
Meta is also establishing a new AI Wellbeing Expert Council, comprising specialists in youth development, mental health, and responsible AI. The council includes experts affiliated with institutions such as the University of Michigan, the University of Texas, and the University of Southern California, as well as organisations focused on suicide prevention and youth wellbeing. The group will advise Meta on ensuring its AI tools remain safe and age-appropriate for teen users.
According to Meta, the council has already contributed to the development of the new parental insights feature and will continue to provide ongoing guidance as the company expands its AI offerings for younger users.

