The report claims that these companies have been actively removing extremist videos from their platforms. They have reportedly deployed systems that detect and take down videos related to the Islamic State, or similar material, and that can also recognize when a removed video is reposted and attempt to block it.
Unsurprisingly, the companies involved did not comment on the system in place, such as how it works or how it is used, but some have speculated that it checks uploads against a database of videos previously flagged as “extremist” in nature, although this remains speculation for now.
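None of this has been confirmed, but if the speculation is right, the core idea would be fairly simple: fingerprint each upload and compare it against a list of fingerprints for previously flagged videos. Here is a minimal Python sketch of that approach; the `FLAGGED_HASHES` set, `fingerprint`, and `is_known_repost` names are hypothetical, and a production system would almost certainly use perceptual hashes that survive re-encoding rather than the plain cryptographic hash shown here, which only catches byte-identical reposts.

```python
import hashlib

# Hypothetical database of fingerprints for videos already flagged
# as extremist. A real system would likely store perceptual hashes
# robust to re-encoding, cropping, and watermarking.
FLAGGED_HASHES = {
    "9f2c5d1e...",  # placeholder fingerprint entries
}

def fingerprint(path: str) -> str:
    """Compute a SHA-256 fingerprint of a video file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_repost(path: str) -> bool:
    """Return True if the upload matches a previously flagged video."""
    return fingerprint(path) in FLAGGED_HASHES
```

A design like this would explain the repost-blocking behavior described above: once a video is flagged, every subsequent identical upload can be rejected at ingest without any human review.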
If the reports are true, it will be interesting to see how this squares with previous studies suggesting that a spike in views for extremist videos can sometimes be a sign that an attack is about to take place; aggressive takedowns could remove exactly the signal those studies relied on.
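The studies do not describe how such a spike would be measured, but one simple, purely illustrative way to operationalize it is to flag a day whose view count sits several standard deviations above the historical average. The `is_view_spike` function below is a hypothetical sketch, not anything the reports attribute to these companies.

```python
from statistics import mean, stdev

def is_view_spike(daily_views: list[int], threshold: float = 3.0) -> bool:
    """Flag the most recent day if its view count exceeds the
    historical mean by more than `threshold` standard deviations."""
    history, latest = daily_views[:-1], daily_views[-1]
    if len(history) < 2:
        return False  # not enough history to estimate variance
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu
    return (latest - mu) / sigma > threshold

# Example: a quiet week followed by a sudden surge triggers the flag.
print(is_view_spike([120, 135, 110, 128, 890]))  # True
```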