Meta is introducing additional safeguards for adult-run Instagram accounts that primarily feature children, the company announced Wednesday. These accounts will automatically be placed in the app's strictest messaging settings to prevent unwanted messages, and will have the platform's "Hidden Words" feature enabled to filter offensive comments. The company is also rolling out new safety features for teen accounts.
Accounts placed under the new, more stringent messaging settings include those run by adults who regularly share photos and videos of their children, as well as accounts run by parents or talent managers on behalf of children.
"Though these accounts are used in overwhelmingly benign ways, unfortunately, some people leave sexualized comments or ask for sexual images in DMs," the company wrote in a blog post. "Today, we're announcing steps to help prevent this abuse."
Meta says it will try to keep potentially suspicious adults, such as those already blocked by teens, away from these accounts. The company will avoid recommending suspicious adults to these accounts on Instagram, and vice versa, and will make it harder for them to find each other in Instagram search.
Today's announcement comes as Meta and Instagram have spent the past year taking steps to address mental health concerns related to social media. These concerns have been raised by the general public and by the US Surgeon General, and some states have gone as far as requiring parental consent for access to social media.
This change will have a significant impact on family bloggers/creators and parents running "kidfluencer" accounts. A New York Times investigation published last year found that parents were often well aware of their children's exploitation, and some even took part in it by selling photos or the clothing their children wore. The Times' examination of 5,000 parent-run accounts found 32 million connections to male followers.
The company says accounts placed under these stricter settings will see a notification at the top of their Instagram feed informing them that the platform has updated their safety settings. The notification will also prompt them to review their account's privacy settings.
Meta also says it removed nearly 135,000 Instagram accounts that were sexualizing accounts primarily featuring children, along with 500,000 Instagram and Facebook accounts associated with the original accounts it removed.
In addition to today's announcement, Meta is bringing new safety features to DMs in teen accounts, its app experience with built-in protections that is automatically applied to teens.
Teens will now see new options for viewing safety tips, reminding them to check profiles carefully and be mindful of what they share. They will also see the month and year an account joined Instagram at the top of new chats. Additionally, Instagram has added a new combined block-and-report option that lets users do both at the same time.
The new features are designed to give teens more context about the accounts they are messaging and to help them spot potential scammers, Meta says.
"These new features will remind people to be cautious in private messages and complement the safety notices we show to encourage teens to block and report anything that makes them uncomfortable," the company wrote. "In June alone, teens blocked accounts 1 million times and reported another million after seeing a safety notice."
Meta also shared an update on its nudity protection filter, noting that 99% of people, including teens, have kept it turned on. Over 40% of the blurred images received via DM last month remained blurred, the company said.