Instagram, Facebook’s photo-sharing platform, proactively took action against about 2.8 million pieces of content across nine categories during the same period.
The company said it had received 1,504 user reports for Facebook and 265 reports for Instagram through its Indian grievance mechanism between June 16 and July 31, and that it had responded to all of them.
A Facebook spokesperson said that, over the years, the company has consistently invested in technology, people and processes to keep users safe and secure online and to enable them to express themselves freely on its platform.
This report contains details of the content that has been removed proactively using automated tools, as well as details of user complaints received and action taken, the spokesperson noted.
In its report, Facebook said it had “actioned” over 33.3 million pieces of content across ten categories during June 16-July 31, 2021.
This includes content related to spam (25.6 million), violent and graphic content (3.5 million), adult nudity and sexual activity (2.6 million), and hate speech (324,300).
Other categories under which content was actioned include bullying and harassment (123,400), suicide and self-injury (945,600), dangerous organisations and individuals: terrorist propaganda (121,200), and dangerous organisations and individuals: organised hate (94,500).
“Actioned” content refers to the number of pieces of content (such as posts, photos, videos or comments) where action has been taken for violation of standards. Taking action could include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some audiences with a warning.
The proactive rate, which indicates the percentage of all content or accounts acted on that Facebook found and flagged using technology before users reported them, ranged from 86.8 to 99.9 per cent in most of these cases.
The proactive rate for removal of content related to bullying and harassment was 42.3 per cent as this content is contextual and highly personal by nature. In many instances, people need to report this behaviour to Facebook before it can identify or remove such content.
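For readers who want to see the arithmetic implied by that definition, here is a minimal sketch of how such a percentage is derived. The function name and sample figures are illustrative assumptions, not Facebook's published methodology or data.

    # Minimal sketch of how a "proactive rate" is computed from the kinds of
    # figures reported above. Sample numbers are illustrative, not from the report.

    def proactive_rate(proactively_flagged: int, total_actioned: int) -> float:
        """Share of actioned content that was found and flagged by automated
        tools before any user reported it, expressed as a percentage."""
        if total_actioned == 0:
            return 0.0
        return round(100 * proactively_flagged / total_actioned, 1)

    # Hypothetical example: if 42,300 of 100,000 actioned pieces in a category
    # were caught by automated systems first, the proactive rate is 42.3 per cent.
    print(proactive_rate(42_300, 100_000))  # 42.3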
Under the new IT rules, large digital platforms (those with over 5 million users) have to publish compliance reports every month, detailing the complaints received and the action taken on them. The report must also include the number of specific communication links or parts of information that the intermediary has removed or disabled access to as a result of any proactive monitoring conducted using automated tools.
In the May 15-June 15 period, Facebook had “actioned” over 30 million content pieces across ten violation categories, while Instagram had taken action against about two million pieces across nine categories.
For Instagram, 2.8 million pieces of content were actioned across nine categories during the June 16-July 31 period. This includes content related to suicide and self-injury (811,000), violent and graphic content (1.1 million), adult nudity and sexual activity (676,100), and bullying and harassment (195,100).
Other categories under which content was actioned include hate speech (56,200), dangerous organisations and individuals: terrorist propaganda (9,100), and dangerous organisations and individuals: organised hate (5,500).
Between June 16 and July 31, Facebook received 1,504 reports through its Indian grievance mechanism. “Of these incoming reports, we provided tools for users to resolve their issues in 1,326 cases. These include pre-established channels to report content for specific violations, self-remediation flows where they can download their data, avenues to address account hacked issues etc,” it said.
During the same time frame, Instagram received 265 reports through the Indian grievance mechanism, and provided tools for users to resolve their issues in 181 cases, it added.
Earlier in the day, Google said it had received 36,934 complaints from users and removed 95,680 pieces of content based on those complaints, and that it took down 576,892 pieces of content in July as a result of automated detection.