Instagram to Start Warning Users Before They Post ‘Potentially Offensive’ Captions
Starting Monday, Instagram will begin warning users before they post ‘potentially offensive’ captions on a photo or video. If a user writes something that the service’s AI-powered tools think could be hurtful, the app will generate a notification saying that the caption “looks similar to others that have been reported.” The app will then encourage the user to edit the caption, but it will also give them the option of posting it unchanged.
The new feature builds upon a similar tool that Instagram introduced for comments back in July. The company says that asking people to reconsider posting potentially hurtful comments has shown promising results in its fight against online bullying. Instagram says the caption warnings are rolling out in select countries for now, with a global expansion planned in the coming months.
This feature is the latest in a series of measures Instagram has taken to address bullying on its platform. In October, the company launched a “restrict” feature that lets users effectively shadow-ban their bullies. Last year, it started using AI to filter offensive comments and proactively detect bullying in photos and captions.
Unlike Instagram’s other moderation tools, this one relies on users themselves to recognize when a caption crosses the line. It’s unlikely to stop the platform’s more determined bullies, but hopefully it can protect people from thoughtless insults.