Instagram announces new AI tools to stop ‘suspicious’ adults from messaging children


Facebook has announced new updates for Instagram intended to keep young people safer on the platform, including making it more difficult for adults to direct message teenagers, encouraging private accounts, and adding safety notices to Instagram’s direct messages.

The new features include blocking adults from sending messages to people under 18 who do not follow them; the adult user will instead receive a notification from Instagram that direct messaging is not available.

“This feature relies on our work to predict peoples’ ages using machine learning technology, and the age people give us when they sign up. As we move to end-to-end encryption, we’re investing in features that protect privacy and keep people safe without accessing the content of DMs,” Instagram said.

Prompts will also be sent to teenagers encouraging them to be cautious in conversations with adults, with Instagram saying it will notify young users “when an adult who has been exhibiting potentially suspicious behaviour is interacting with them in DMs”.


This behaviour is detected by artificial intelligence, the company explains, which can tell, for example, if an adult is sending a large number of friend or message requests to people under the age of 18.

Instagram will also make it more difficult for adults who have been exhibiting suspicious behaviour to find teenagers’ accounts, by preventing them from appearing in ‘Suggested Users’ and from being shown in Instagram’s Explore or Reels (its TikTok rival) tabs.

Finally, Instagram will encourage young people to have private accounts by sending them a notification “highlighting the benefits of a private account and reminding them to check their settings”.

Instagram said more features and additional privacy settings are coming in the next few months.

In the last two years, Facebook-owned apps (Facebook, Messenger, Instagram, WhatsApp) and Snapchat were used in more than 70 per cent of instances, and Instagram specifically was used in more than a quarter of cases.

Where age was provided, one in five victims were aged 11 or under – a clear violation of Instagram’s minimum age requirement, which the company is hoping to address – with mentions of Instagram doubling in reports.
