In January, the Oversight Board published decisions on its first set of cases, which we immediately implemented. The board also published recommendations covering 17 areas where Facebook could improve its content moderation.
In addition to implementing the Oversight Board’s binding rulings on content, we are committed to considering its recommendations and communicating transparently about the actions we take. Today, we are committing to action on 11 of the board’s recent recommendations. In several of these instances, we have already acted on the board’s recommendation; in others, we are committing to what the board recommended, or going further. We are also assessing the feasibility of five more of the recommendations and will provide updates in the future.
There is one remaining recommendation that we disagree with and will not be taking action on since it relates to softening our enforcement of COVID-19 misinformation. In consultation with global health authorities, we continue to believe our approach of removing COVID-19 misinformation that might lead to imminent harm is the correct one during a global pandemic.
For each recommendation, we have provided detailed responses and will continue to update as we make progress on our commitments. Some of the actions we are taking include:
- Providing more transparency.
- Consolidating and clarifying health misinformation policies. As announced earlier this month, we have consolidated information about health misinformation in a Help Center article, which we now link to in the Community Standards. We’ve also clarified our health misinformation policy as part of a larger COVID-19 update, including adding more details on our rules and giving examples of the type of false claims we will remove.
- Updating Instagram policies. We’ve updated Instagram’s policy on nudity to clarify that health-related nudity is permitted. We will also undertake a more comprehensive update to reflect all the policies we enforce on Instagram today, and give people more information on the relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.
- Launching a Transparency Center. We’ve been working on a new Transparency Center which we expect to launch in the coming months. It will be a destination for people to get more information about our Community Standards and how we enforce them.
- Explaining key terms. We will look for the best way to explain key terms in our Community Standards and share more information about our Dangerous Individuals and Organizations policy.
- Carefully calibrating our use of automation.
- Improving automated detection. We are always improving our automation technology and will continue to refine our machine learning models so they’re better at detecting the kinds of nudity we do allow. This includes improving computer vision signals, sampling more training data for our machine learning models, and, when we’re not as confident about the accuracy of our automation, ensuring people review the content.
- Exploring when people and technology should be used for review and appeals. Technology allows us to detect and remove harmful content before people report it, and sometimes before people see it. We typically launch automated removals only when they are at least as accurate as decisions made by content reviewers. We’ll continue to evaluate which kinds of reviews or appeals should be done by people and which can be safely handled by automated systems, and how best to provide transparency about how decisions were made.
- Transparency around automation. We will test the board’s recommendation to tell people when their content is removed by automation.
- Continuing to evaluate our COVID-19 policies.
- Continually evaluating tools. We’ll continue to assess and develop a range of tools to address health misinformation, favoring those least intrusive to expression wherever possible.
- Removing based on consultation with experts. We’ll keep looking to leading scientists, including from the World Health Organization and other public health authorities, to tell us what is likely to contribute to imminent physical harm.
Implementing the Board’s Decisions At Scale
We’ve also started the process of reinstating identical content with parallel context in the following cases: Uyghur Muslims, Hydroxychloroquine, Azithromycin and COVID-19, and a Nazi Quote. These actions will affect not only content previously posted on Facebook and Instagram but also future content.
For cases where the board upholds our final judgment, we will continue to ensure identical content with parallel context remains either up or down in line with the board’s decision.
When we created the Oversight Board, we hoped its impact would come not just from its decisions on individual cases, but also from broader recommendations on how we can improve our policies and practices. This is the start of that process.
The board deals with some of the trickiest content moderation issues Facebook faces, where there are often no easy decisions. We want this process to be as open and transparent as possible, which is why we are responding to each of its recommendations in detail here.
A summary of our response to the recommendations made in the Breast Cancer Symptoms and Nudity case is below. Please see the overall detailed response here or the case Newsroom post for additional details.
A summary of our response to the recommendations made in the Hydroxychloroquine, Azithromycin and COVID-19 case is below. Please see the overall detailed response here or the case Newsroom post for additional details.