YouTube is tightening its community guidelines to ban videos that promote the superiority of any group as a justification for discrimination against others based on their age, gender, race, caste, religion, sexual orientation, or veteran status, the company said today. The change should result in the removal of all videos promoting Nazism and other discriminatory ideologies, and is expected to affect thousands of channels across YouTube.

"The openness of YouTube's platform has helped creativity and access to information thrive," the company said in a blog post. "It's our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence." 

In addition, YouTube will remove videos that feature "content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place."

YouTube doesn't permit most content that a reasonable adult would find inappropriate. This includes pornography, graphic violence, cyberbullying, spam, threats, impersonation, copyright violations, and anything that puts someone in danger. If you're not sure whether a video, channel, or comment is permitted, check YouTube's community guidelines.

Your report will be submitted to YouTube's moderators, who will review the content to determine whether it violates the site's guidelines. YouTube says its moderators work 24/7. Don't worry: all reports are anonymous, so the content owner won't know who reported them.
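For developers, the same flagging flow is also exposed programmatically through the YouTube Data API v3. The sketch below is illustrative only: it assumes the google-api-python-client library and an already-authorized OAuth 2.0 credential (here called credentials), and the report_video helper is a hypothetical name, not part of the API. Valid abuse-report reason IDs should be looked up through videoAbuseReportReasons.list rather than hard-coded.

    # Illustrative sketch, not YouTube's internal tooling: filing an abuse
    # report through the YouTube Data API v3 (google-api-python-client).
    from googleapiclient.discovery import build

    def report_video(credentials, video_id, comments=""):
        # `credentials` is assumed to be an already-authorized OAuth 2.0
        # credential with a YouTube scope.
        youtube = build("youtube", "v3", credentials=credentials)

        # Reason IDs are not fixed constants; fetch the current list and
        # pick the one matching the violation being reported.
        reasons = youtube.videoAbuseReportReasons().list(part="snippet").execute()
        reason_id = reasons["items"][0]["id"]  # first reason, for illustration

        # Submit the report. As with the web UI, the uploader is not told
        # who filed it.
        youtube.videos().reportAbuse(
            body={
                "videoId": video_id,
                "reasonId": reason_id,
                "comments": comments,
            }
        ).execute()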

YouTube also said it would restrict channels from making money off of videos if they are found to "repeatedly brush up against our hate speech policies." Those channels will not be able to run ads or use Super Chat, which lets channel subscribers pay creators directly for extra chat features. That change comes after BuzzFeed reported that the paid commenting system had been used to fund creators of videos featuring racism and hate speech.

Google first announced a tougher stance on terrorist content, hate speech, and discriminatory content in 2017, as the internet giant grappled with the reality of operating a free and open platform while still monitoring it for harmful activity.

Since then, Google has steadily tightened its rules on the kinds of content allowed on its platforms, such as YouTube, and stepped up its efforts to moderate that media.