Model-driven Abusive Language Detection
Abusive user-posted content has become a serious issue for many forms of online communication, especially on social media platforms. While the effects of abusive content on society and governments are still being determined, we know that comments containing abusive language can damage lives and put targets of abuse at risk. To address this, we have developed a model-driven approach to abusive language detection. Our approach is based on motivators that are likely to lead someone to post abusive content. Using these motivators as intermediate constructs of abusive language, we develop several models based on othering, subjectivity, moods, and emotions. This model-driven approach first considers the processes that lead a person to post an abusive comment and constructs a predictor based on each of the models. The individual predictors are combined into a stacked predictor that detects abusive language with an accuracy of 95% and an F1-score of 91%. Beyond a strong abusive language detection technique, we explore how each intermediate construct contributes to our predictor and its role in abusive language.
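The stacking idea described above can be sketched in miniature: each base model scores a comment for one intermediate construct (e.g. othering, subjectivity), and a meta-layer combines the scores into a single abusive/non-abusive decision. The keyword cues, weights, and threshold below are illustrative placeholders, not the thesis's actual features or learned parameters.

```python
# Minimal sketch of a stacked predictor over intermediate constructs.
# The cue lists and weights are stand-ins; the thesis's real base models
# are far richer than these keyword heuristics.

OTHERING_CUES = {"them", "they", "those"}       # placeholder othering cues
SUBJECTIVE_CUES = {"hate", "disgusting", "awful"}  # placeholder subjectivity cues

def othering_score(text: str) -> float:
    """Fraction of words matching othering cues (stand-in base model)."""
    words = text.lower().split()
    return sum(w in OTHERING_CUES for w in words) / max(len(words), 1)

def subjectivity_score(text: str) -> float:
    """Fraction of words matching subjectivity cues (stand-in base model)."""
    words = text.lower().split()
    return sum(w in SUBJECTIVE_CUES for w in words) / max(len(words), 1)

def stacked_predict(text: str, threshold: float = 0.1) -> bool:
    """Meta-layer: weighted sum of base-model scores (weights are placeholders)."""
    score = 0.6 * othering_score(text) + 0.4 * subjectivity_score(text)
    return score >= threshold

print(stacked_predict("I hate those people they are awful"))  # True
print(stacked_predict("what a lovely sunny day today"))       # False
```

In the thesis, the meta-layer is a trained stacked classifier rather than fixed weights; this sketch only shows how construct-level scores feed a single combined decision.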
URI for this record: http://hdl.handle.net/1974/26252