Advertising dollars are the bread and butter for Facebook and other social media companies.
But the tech firm is drawing a clearer line when it comes to profiting from tragedy, fake news and other offensive content.
On Wednesday, Facebook released new rules that spell out what type of content publishers and creators cannot make money from on the social media site.
The tech firm said that publishers whose content depicts misappropriation of children’s characters, tragedy and conflict, “debated social issues,” violence, nudity, drug use, overly graphic images, derogatory language or other offensive material may not be eligible to make money from ads.
Publishers and creators who run afoul of the rules multiple times could lose access to all of Facebook’s monetization features.
After the 2016 presidential election, tech firms including Facebook were criticized for not doing enough to stop the spread of fake news and misinformation.
Nick Grudin, Facebook’s VP of Media Partnerships, said in a blog post on Wednesday that those who share clickbait, sensationalism, fake news and misinformation could also be ineligible to make money through the social media site.
Lawmakers and the public have also called on tech firms to do more to crack down on terrorism, violence and other offensive content on the platform.
Some companies also started pulling advertisements from Google-owned YouTube this year after they discovered that their ads were appearing next to offensive or hateful content.
Facebook said it’s also releasing new tools so advertisers know which publishers ran their ads.
“With regards to brand safety, generally, people who view content in News Feed implicitly understand that the individual posts they see are not connected to or endorsed by the other posts in their feed—from brands or anyone else,” wrote Carolyn Everson, Facebook’s VP of Global Marketing Solutions, in a blog post. “That being said, content adjacency might still be a concern for other ad placements in which the disconnect between content and advertisement may not be as clear.”
Every week, millions of pieces of content are flagged on Facebook for possibly violating its online rules, she wrote. The company removes the content if it violates these standards.
“Keeping our community safe is critical to our mission, and there is absolutely no place on Facebook for hate speech or content that promotes violence or terrorism,” Everson wrote.