YouTube Will Send a Notification to Users If Their Comment is Abusive

Toxic and hateful comments on YouTube have been a constant headache for the company, creators, and users. YouTube has previously attempted to curtail the problem with features such as an alert shown to individuals at the time of posting so that they can reconsider their wording. Now, the streaming service is introducing a new feature that more aggressively nudges such individuals about their abusive comments and takes broader action against repeat offenders.

YouTube says it will send a notification to people whose abusive comments have been removed for violating the platform’s rules. If, despite receiving the notification, a user continues to post abusive comments, the service will ban them from posting any further comments for 24 hours. The company said it tested the feature before today’s rollout and found that the notifications and timeouts were measurably effective.

At the moment, hateful comment detection is available only for English-language comments, but the streaming service aims to add more languages in the future. Notably, the pre-posting warning is available in English and Spanish.

“Our goal is to both protect creators from users trying to negatively impact the community via comments, as well as offer more transparency to users who may have had comments removed due to policy violations and hopefully help them understand our Community Guidelines,” the company said.

If a user thinks that their comment has been wrongfully removed, they can share their feedback. The company, however, didn’t say whether it would restore such comments after reviewing the feedback.

Additionally, in a forum post, YouTube said that it has been working on improving its AI-powered detection systems. The company claimed it removed 1.1 billion “spammy” comments in the first half of 2022. YouTube has also enhanced its system to better detect and remove bots in live chats, it said.

YouTube and other social networks have been able to reduce spam and abusive content in part by relying on automated detection. However, abusers often use different slang or misspell words to trick the system. What’s more, it’s harder to catch people posting hateful comments in non-English languages.

The streaming company has tested a wide range of tools in recent quarters to reduce hateful comments on the platform. These tests included hiding comments by default and showing a user’s comment history on their profile card.

Last month, YouTube rolled out a feature that lets creators hide a particular user from comments. This control applies to the whole channel, so even if the user posts hateful comments on another video, those comments won’t show up.

Platforms globally are grappling with the issue of curtailing the spread of hateful comments.

Instagram became a breeding ground for such abuse when England footballers Bukayo Saka, Marcus Rashford, and Jadon Sancho were harassed for missing penalty kicks in last year’s Euro final. A new report from GLAAD and Media Matters noted that anti-LGBTQ slurs have skyrocketed since Elon Musk took over Twitter. While all these platforms have rolled out tools to mute or hide comments and restrict who can comment, the volume of hateful and abusive comments remains a problem at massive scale.

Source: TechCrunch
