Instagram will introduce a parental-guidance system modeled on the PG-13 movie rating, tightening restrictions on what teenagers can view and interact with on the social-media platform, parent company Meta announced Tuesday.

The new policy, which will apply automatically to all users under 18, aims to give parents stronger oversight and align the app's content standards with an "independent standard that parents are familiar with," Meta said. Teens will be placed in a 13+ setting by default and can opt out only with parental approval.

Meta said the shift is designed to make teen experiences "feel closer to the Instagram equivalent of watching a PG-13 movie." The company added that while the app already limits exposure to sexually suggestive and violent material, the PG-13 framework will extend restrictions to posts featuring strong language, risky stunts, or depictions of drug paraphernalia. It will also block searches for terms such as "alcohol" or "gore," including misspellings, and prevent the recommendation of age-inappropriate accounts.

"Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram, but we're going to keep doing all we can to keep those instances as rare as possible," Meta said in a blog post. The company also said its AI chatbot "should not give age-inappropriate responses that would feel out of place in a PG-13 movie."

Under the new policy, Instagram will automatically block teens from following accounts that regularly share adult content. Teens who already follow such accounts will no longer be able to see those accounts' posts, interact with them, or exchange direct messages with them. Parents who link their accounts to their teens' will be able to activate an even stricter "Limited Content" mode that filters out additional categories of posts and limits comment activity.

The update follows years of scrutiny from lawmakers, parents, and advocacy groups who argue that Meta's safety tools fail to protect minors. An independent review led by former Meta engineer Arturo Béjar and researchers from New York University, Northeastern University, and the UK's Molly Rose Foundation found that 64% of Instagram's new safety tools were ineffective. "Kids are not safe on Instagram," Béjar said. Meta disputed those findings, saying parents already have "robust tools at their fingertips."

Another study released this month by child-advocacy groups found that nearly 60% of 13- to 15-year-olds on Instagram's teen accounts still encountered "unsafe content and unwanted messages" over a six-month period. Meta called the research biased and said it overlooked teens who have positive experiences on the platform.

Instagram's PG-13 restrictions will roll out first in the United States, United Kingdom, Australia, and Canada before expanding to Europe and other markets next year. Meta said its artificial-intelligence systems will help identify users who misrepresent their age to bypass the new settings.