
How To Report Hate Online

You just saw something on your feed that was offensive, potentially harmful, or straight-up terrible. Here's what to do about it.

After a gunman murdered 50 worshippers (and injured an additional 50) at a mosque in Christchurch, New Zealand, on March 15, it became obvious how much of his hatred was fuelled by his participation in online forums. But long before this incredibly tragic event, it had already become clear that social media platforms need to do more to combat anti-Muslim sentiment online.

Now, in response to intensified calls for platforms to step up, Facebook has made a landmark announcement: it will combat hate by banning white nationalism and white separatism on both Facebook and Instagram, beginning the week of March 31. (In other words, when human or artificial-intelligence moderators identify these types of posts, they'll be taken down.) Both social networks will also serve up support resources to people who search for terms related to white supremacy.

In case you had doubts about how normalized hate is, especially online, Facebook revealed that nearly 200 people viewed the Christchurch gunman’s livestreamed video, but no one reported it until 12 minutes after it had ended. If that horrifies you—as it should—here's how to take action next time you run into something awful online.

How can I report bad content, and will my reports do anything?

Social media platforms have specific rules about what they will and will not take down, because (a) they want more users, more content and more time spent on their platforms, and (b) they don’t want to violate free speech. However, they do need to follow hate speech laws in every country in which their platform operates, and each platform has its own policies and guidelines that users must follow. If a post or account is found to violate those specific terms, the platform will take it down. But artificial intelligence and human moderators can’t catch everything.

Often platforms will only take down content that directly incites hate, which is why it’s still so easy for misinformation to spread. So yes, your reports are important, but you can make them more effective by being as specific as possible. Here's how to do so on each major platform.

Facebook

As the world's most popular social media platform, Facebook has received the most scrutiny over its content moderation, and as a result has a pretty comprehensive and simple-to-use reporting system based on its community standards. Whether you’re looking at a hateful post, page, group or profile, look for the option to “give feedback on” or “report” it. While giving feedback doesn’t sound as serious, it does help Facebook’s AI systems and moderators know what types of content to look at and helps them draw patterns for future reference. You’ll get a series of options for reporting violent and hateful content, such as hate speech, violence, terrorism and even false news. That last option—unique to Facebook—is critical because misinformation can often lead to serious consequences. (Take, for example, Pizzagate, in which fabricated reports about a child sex ring led a man to fire an assault rifle in a Washington, D.C. pizza shop.)

After you choose one of the listed reasons for flagging the content to Facebook, the platform will suggest other steps you can take—blocking an account, hiding posts from a page, or messaging the page to resolve the issue. For some reports (Facebook doesn't specify which kind), you will have the option to check your "support inbox" for a status update. 


In the case of seeing a dangerous livestream—whether it involves a crime or someone considering self-harm—report the video to Facebook ASAP using the "report" function, and indicate that someone is in harm's way. If you know where the event is taking place, call the local police as you would if you had seen it happening in the physical world.

Instagram

Although Facebook owns Instagram, its reporting system is far less helpful (though hopefully this is set to change).

To report something, hit the three dots on the top right corner of a post and click on the "report" option at the bottom. Choose "report for spam" if the account looks fake (a.k.a. a bot), or flag the post as “inappropriate.” Instagram will eventually let you know whether it has taken action.

However, unless there is a direct display of racist language or symbols, or a direct call to violence, IG has generally allowed posts to stay up (contrast this with how quickly it takes down posts that allegedly violate its nudity policies). The new policy should change that.

There is still no way to report false information on Instagram. In fact, on Instagram’s pages about reporting, under the section for “Hate Accounts,” users are merely redirected to its section on “harassment or bullying.”

Twitter


When you click on the down-facing arrow at the top right of a tweet, you can select "report" and the platform will give you options: you’re not interested in the tweet, it’s suspicious or spam, it displays a sensitive image, or it’s abusive or harmful.

If you choose "abusive or harmful," you can also note whether the tweet is "disrespectful or offensive." If you choose this latter option, you'll receive a message from the social platform that apologizes that you were offended and suggests you mute or block the account in question.

After flagging a tweet as offensive, it's best to take things a step further and explain more specifically what the issue is. For example, if the tweet is promoting violence towards a group of people or spreading rumours about a group to make them look bad, you can try reporting for “directing hate against a protected category or threatening violence or physical harm.” From there, you have the option to add up to five other tweets to back your claim.

YouTube

Hit the three grey dots under the bottom corner of a video (which stand for “more”) and click on "report." From there, YouTube will let you report the video for breaking its community guidelines, which ban sexual content, violent or repulsive content, hateful or abusive content, harmful or dangerous acts, child abuse, promotion of terrorism, and spam or misleading content; you can also report a video for infringing your rights (for legal claims, such as copyright) or simply flag a captioning issue. The more specific your report, the better. The platform will ask you to give a timestamp for the offending material, and you can type out your complaint in greater detail.

Note that YouTube defines hateful content as content that promotes or condones violence against people based on their race, ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, “or whose primary purpose is inciting hatred” on the basis of those characteristics. But sometimes hateful content is difficult to report if it doesn’t have specific indications of violence. For example, content vaguely targeting refugees or immigrants usually has obvious underlying tones of racism, but without directly specifying hate towards a specific race, your report might not get very far.

