The report is based on an independent human rights impact assessment (HRIA) commissioned by Meta in 2019 to examine potential human rights risks related to its platforms in India and other countries.
The project was undertaken by Foley Hoag LLP.
“The HRIA noted the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties, including restrictions of freedom of expression and information; third party advocacy of hatred that incites hostility, discrimination, or violence; rights to non-discrimination; as well as violations of rights to privacy and security of person,” the report said.
The report found that Meta faced criticism and potential reputational risks arising from hateful or discriminatory speech by end users.
The assessment also noted a difference between company and external stakeholder understandings of content policies.
“It noted persistent challenges relating to user education; difficulties of reporting and reviewing content; and challenges in enforcing content policies across different languages. In addition, the assessors noted that civil society stakeholders raised several allegations of bias in content moderation. The assessors did not assess or reach conclusions about whether such bias existed,” the report said.
According to the report, the project was launched in March 2020 and experienced limitations caused by Covid-19, with a research and content end date of June 30, 2021.
The assessment was conducted independently of Meta, the report said.
The HRIA developed recommendations for Meta around implementation and oversight, content moderation, product interventions and other areas, which Meta is studying and will consider as a baseline to identify and guide related actions, the report said.