Instagram has begun testing artificial intelligence to identify underage users who falsify their birthdates when signing up for the platform, parent company Meta announced on Monday.
While Meta has already been using AI to estimate users’ ages, the photo and video-sharing app is now applying the technology more proactively to detect accounts that likely belong to teenagers—even if a false birthdate was entered.
If a user is found to have misrepresented their age, their account will be automatically reclassified as a teen account, which carries more restrictions than an adult account. These include profiles set to private by default, direct messaging limited to people the teen follows or is already connected with, and reduced exposure to sensitive content, such as videos of fights or posts promoting cosmetic procedures.
In addition, Instagram will send alerts to teens who spend more than 60 minutes on the app, and a new “sleep mode” will activate from 10 p.m. to 7 a.m., muting notifications and sending automatic replies to direct messages.
Meta explained that its AI is trained to analyze signals such as the type of content an account interacts with, profile details, and when the account was created in order to better estimate a user's real age.
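Meta has not disclosed how its model works internally. As a purely hypothetical illustration of how a signal-based classifier of this kind might combine such cues, consider the minimal sketch below; every feature name, weight, and threshold is assumed for the example and is not drawn from Meta's system.

```python
# Toy illustration only -- not Meta's implementation. Shows how signals like
# those mentioned in the article (content interactions, profile details,
# account creation date) could feed a simple age-estimation score.
# All feature names, weights, and thresholds are hypothetical.

from dataclasses import dataclass
from datetime import date
import math


@dataclass
class AccountSignals:
    stated_birthdate: date           # birthdate the user entered at sign-up
    account_created: date            # when the account was created
    teen_content_interaction: float  # share of interactions with teen-oriented content, 0..1
    profile_mentions_school: bool    # e.g. a school name or graduation year in the bio


def likely_teen_probability(signals: AccountSignals) -> float:
    """Return a hypothetical probability that the account belongs to a teen,
    regardless of the birthdate entered at sign-up."""
    # Hypothetical weights; a real system would learn these from labeled data.
    score = -1.0
    score += 2.5 * signals.teen_content_interaction
    if signals.profile_mentions_school:
        score += 1.2
    # Very new accounts carry little history, so discount the evidence slightly.
    account_age_days = (date.today() - signals.account_created).days
    if account_age_days < 30:
        score -= 0.5
    return 1.0 / (1.0 + math.exp(-score))  # squash the score to a 0..1 probability


def should_reclassify_as_teen(signals: AccountSignals, threshold: float = 0.8) -> bool:
    """Flag accounts whose stated age is adult but whose signals look teen-like."""
    stated_age = (date.today() - signals.stated_birthdate).days // 365
    return stated_age >= 18 and likely_teen_probability(signals) >= threshold


if __name__ == "__main__":
    example = AccountSignals(
        stated_birthdate=date(1990, 1, 1),
        account_created=date(2024, 9, 1),
        teen_content_interaction=0.9,
        profile_mentions_school=True,
    )
    print(should_reclassify_as_teen(example))  # True under these toy weights
```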
These enhanced measures come amid growing concern over the impact of social media on young people’s mental health. At the same time, several U.S. states are pushing for age verification laws, although many have faced legal challenges.
Meta and other tech companies have voiced support for shifting the responsibility of age verification to app stores, responding to criticism that platforms don’t do enough to keep children under 13 from accessing their services.
As part of its new efforts, Instagram will also send notifications to parents with information on how to talk to their teens about the importance of providing accurate age information online.