Facebook estimates 0.1% of content viewed on social network is hate speech

"While we are constantly improving our AI tools, they are far from perfect," CTO Mike Schroepfer said.

Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified (spotted by the company before a user reported it), compared with 22.5 million pieces in the previous quarter.

Facebook has disclosed that out of every 10,000 content views in the third quarter of this year, 10 to 11 included hate speech.
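Facebook calls this measure "prevalence": it samples content views and reports the share that contained violating material. The following is a rough, illustrative sketch of that arithmetic in Python, using only the figures disclosed above (10 to 11 hate speech views per 10,000 sampled views); the function name is hypothetical, not part of any Facebook tooling.

```python
# Illustrative only: restates Facebook's disclosed figures, not internal data.
def prevalence(violating_views: int, sampled_views: int) -> float:
    """Share of sampled content views that contained hate speech."""
    return violating_views / sampled_views

low = prevalence(10, 10_000)    # 0.0010 -> 0.10%
high = prevalence(11, 10_000)   # 0.0011 -> 0.11%
print(f"Estimated prevalence: {low:.2%} to {high:.2%}")
```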

Though Facebook has been at the centre of hate speech that has fuelled real-life ethnic violence in many places around the globe, it is clear the company is not keeping up, either because it cannot or because it does not prioritise the wellbeing of communities worldwide.

The release comes with Facebook under rising pressure from governments and activists to crack down on hateful and abusive content while keeping its platform open to divergent viewpoints.

And while strides have been made in proactive detection of hate speech, the platform still has a lot of work to do. On Instagram, the company took action on 4.1 million pieces of violent and graphic content.

"This is really sensitive content". Now Facebook is working on models that can be trained in real time to quickly recognize wholly new types of toxic content as they emerge on the network. "If that is so, it is time to fundamentally change the way that the work is organized". "The idea of moving to an online detection system optimized to detect content in real time is a pretty big deal", he said. It's a tricky, never-ending task to remove objectionable user posts and ads, in part because people are uniquely good at understanding what differentiates, say, an artistic nude painting from an exploitative photo or how words and images that seem innocent on their own can be hurtful when paired. Memes are typically clever or amusing combinations of text and imagery, and only in the combination of the two is the toxic message revealed, he said.

Facebook also gave users a small sneak peek at the effort it put into fighting the misinformation spread during the U.S. presidential election.

For example, the company banned political ads in the week before and after the election and recently announced that it would keep that ban in place until further notice. Meanwhile, the content moderators behind a recent open letter say that those who do come into the office should be given hazard pay.

The letter demands that Facebook take more responsibility for its actions by making employees' health a priority and continuing to let moderators work from home.

The report found that 0.11 percent of all content views on the platform were identified as views of hate speech.

At the start of the pandemic, Facebook asked content moderators to work mostly from home. A few weeks ago, however, while other company employees were told they could work from home until the middle of 2021, moderators were called back to offices, where some have been exposed to COVID-19.

On Tuesday, Mark Zuckerberg appeared before Congress to discuss Facebook's response to misinformation published on its platform before and after the election.

Twitter CEO Jack Dorsey also participated in the hearing. Foxglove, the legal non-profit supporting the moderators, said in a tweet that the open letter is the "biggest joint global effort of Facebook content moderators yet".

So, what do you think about the open letter from Facebook's content moderators and the company's response to it?
