“It’s our responsibility to create a safe environment on Instagram,” said a statement from Adam Mosseri, head of the visually focused social platform owned by Facebook.
“This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”
One new tool being rolled out uses artificial intelligence to warn users, before they post, that their comment may be considered offensive.
“From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”
Another new tool is aimed at limiting the spread of abusive comments on a user’s feed.
“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mosseri commented.
A new feature called “restrict,” currently being tested, will make comments from an offending person visible only to that person.
“You can choose to make a restricted person’s comments visible to others by approving their comments,” Mosseri added.
“Restricted people won’t be able to see when you’re active on Instagram or when you’ve read their direct messages.”
The move by Instagram is the latest in a series of actions by social networks against cyberbullying, hate speech and abusive conduct, which can be especially harmful to young users.