A team of researchers at the University of Waterloo has developed a new machine-learning method that detects hate speech on social media platforms with 88 per cent accuracy, saving employees hundreds of hours of emotionally damaging work.

The method, dubbed the Multi-Modal Discussion Transformer (mDT), can understand the relationship between text and images and place comments in the broader context of a discussion, unlike previous hate-speech detection methods. This is particularly helpful in reducing false positives, in which culturally sensitive language is incorrectly flagged as hate speech.

“We really hope this technology can help reduce the emotional cost of having humans sift through hate speech manually,” said Liam Hebert, a Waterloo computer science PhD student and the first author of the study. “We believe that by taking a community-centred approach in our applications of AI, we can help create safer online spaces for all.”

Researchers have been building models to analyze the meaning of human conversations for many years, but these models have historically struggled to understand nuanced conversations or contextual statements. Previous models have identified hate speech with at most 74 per cent accuracy, below what the Waterloo research was able to accomplish.

“Context is very important when understanding hate speech,” Hebert said. “For example, the comment ‘That’s gross!’ might be innocuous by itself, but its meaning changes dramatically if it’s in response to a photo of pizza with pineapple versus a person from a marginalized group.

“Understanding that distinction is easy for humans, but training a model to understand the contextual connections in a discussion, including considering the images and other multimedia elements within them, is actually a very hard problem.”
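The context problem Hebert describes can be caricatured in a few lines of code: a comment only becomes classifiable once it is paired with its ancestors in the discussion tree. This is a minimal illustrative sketch, not the mDT architecture; the `Comment` structure and `comment_with_context` helper are hypothetical names invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    """One node in a discussion tree (hypothetical structure for illustration)."""
    text: str
    replies: list["Comment"] = field(default_factory=list)

def comment_with_context(root: Comment, target: Comment) -> str:
    """Return the target comment prefixed by its chain of ancestor comments,
    so a classifier sees the conversation rather than an isolated string."""
    def walk(node: Comment, path: list[str]):
        if node is target:
            return path + [node.text]
        for child in node.replies:
            found = walk(child, path + [node.text])
            if found:
                return found
        return None
    return " >> ".join(walk(root, []))

# The same reply reads very differently under different parent comments:
reply = Comment("That's gross!")
thread = Comment("Photo: pineapple pizza", replies=[reply])
print(comment_with_context(thread, reply))
# Photo: pineapple pizza >> That's gross!
```

A context-free model sees only `"That's gross!"` in both cases; a context-aware model sees the full chain and can tell the innocuous reading from the hateful one.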

Unlike previous efforts, the Waterloo team built and trained their model on a dataset consisting not only of isolated hateful comments but also the context for those comments. The model was trained on 8,266 Reddit discussions with 18,359 labelled comments from 850 communities.
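Training on comments together with their surrounding discussion implies records shaped roughly like the following. The field names and example values here are illustrative assumptions, not the paper's actual schema:

```python
# Illustrative record shape: one discussion whose labelled comments
# keep their position in the thread via parent references.
discussion = {
    "community": "r/example",  # hypothetical community name
    "comments": [
        {"id": 1, "parent": None, "text": "Photo: pineapple pizza", "label": "normal"},
        {"id": 2, "parent": 1, "text": "That's gross!", "label": "normal"},
    ],
}

# A context-aware model trains on each labelled comment *with* its
# ancestors, rather than on isolated strings.
labelled = [c for c in discussion["comments"] if c["label"] is not None]
print(len(labelled))  # 2
```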

“More than three billion people use social media every day,” Hebert said. “The impact of these social media platforms has reached unprecedented levels. There’s a huge need to detect hate speech on a large scale to build spaces where everyone is respected and safe.”

The research was recently published in the proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence.
