During the third quarter of this year, YouTube removed 7.8 million videos, more than 1.5 million channels and about 224 million comments for violating its community guidelines.
Like many other tech giants, the Google-owned video site has faced considerable international pressure to take stern steps against problematic content. The company released the figures in its latest Transparency Report last week.
The report covers the period from July to September 2018 and discloses that more than 80% of the 7.8 million deleted videos were first flagged by machines, and that 74.5% of these machine-detected videos were removed before receiving any views.
Of the removed videos, 72.2% fell into the spam or misleading category, 10.2% were taken down over child safety concerns, and 9.9% contained sexual content or nudity.
YouTube reported that over 90% of the most shocking videos posted in September, including those showing extreme violence and content unsuitable for children, received fewer than ten views, which it says indicates that the fight against such content "is having an impact".
In a statement, the company said it uses a mix of technology and reports from viewers to deal with violative content uploaded to its platform. According to the company, this combination of smart detection technology and well-trained human reviewers enables consistent and rapid enforcement of its policies. It added that in 2017 it began applying advanced machine learning to flag content for review by its teams.
This is the first time YouTube's quarterly report has included data on the channels it took down. The company said 79.6% of removed channels were dedicated to spam, misleading content and scams, 12.6% contained nudity or sexual content, and 4.5% were removed for child safety reasons.
Company spokespeople added that YouTube terminates entire channels when they repeatedly carry content prohibited by its community guidelines or commit even a single egregious violation, such as child sexual exploitation. A spokeswoman said the company is confident that a new algorithm will help remove inappropriate content.