The internet can often be a toxic place. People may express offensive opinions about something you do or simply about the way you live your life. Cyberbullying remains a problem as well, and on an image-first social network like Instagram, it's not going away anytime soon. Instagram continues to take steps to prevent bullying and offensive comments on its platform, and it's now relying on AI to further that effort.
It has started rolling out a new AI-powered feature that notifies people if their comment may be considered offensive before it's posted. This intervention gives them the chance to reflect and undo the comment. Instagram says that early tests of the feature show it encourages people to share something less hurtful once they have had a chance to reflect.
Instagram has also heard from users that they may be reluctant to block, unfollow, or report their bully because doing so could escalate the situation, particularly if they interact with that person in real life. To address this, it will test a new feature called Restrict, which essentially lets users shadow ban their bullies.
Once someone has been restricted, their comments on the user's posts will be visible only to the restricted person. Users will have the option to make a restricted person's comments visible to others if they so desire. Restricted people also won't be able to see when the user is active on Instagram or whether the user has read their direct messages.
Filed in Artificial Intelligence and Instagram. Source: instagram-press