Addressing Challenges in Hate Speech Detection using BERT-based Models: A Review




Keywords: Hate Speech Detection, BERT, Feature Extraction, Fine-tuning, Challenges


The rapid growth of social media platforms has led to an increase in hate speech, prompting the development of effective detection mechanisms that mitigate the hazards and threats it poses to society. BERT (Bidirectional Encoder Representations from Transformers) has produced state-of-the-art results in this field. This review identifies and analyzes the whole process of using the BERT model to tackle the challenges of hate speech detection. The discussion begins with the training datasets and the preprocessing methods involved. It then examines how the BERT model is used and the contributions made to address the issues encountered, and finally covers the evaluation phase. BERT is applied through two primary approaches. In the feature-based approach, BERT takes text as input and outputs its corresponding representation, which then serves as input to any classification model. In the second approach, BERT is fine-tuned on labeled datasets and then employed directly as the classifier. The controversial issues and open challenges that arise at each stage are discussed. The results indicate that in both approaches BERT has shown its efficacy relative to the competing models; however, greater attention and further advances are needed to resolve the remaining issues and constraints.
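The two approaches summarized above can be sketched as follows. This is a minimal illustration using the Hugging Face Transformers library, which is an assumption on our part: the surveyed papers may use other toolkits, and the checkpoint name `bert-base-uncased`, the toy texts, and the toy labels are purely illustrative.

```python
# Sketch of the two BERT usage patterns: feature extraction vs. fine-tuning.
# Assumes the Hugging Face Transformers library; checkpoint and data are toy examples.
import torch
from transformers import (AutoModel, AutoModelForSequenceClassification,
                          AutoTokenizer)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["example post", "another post"]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Approach 1: feature-based. BERT stays frozen; the [CLS] token embedding of
# each text becomes the input vector for any downstream classifier.
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    features = bert(**enc).last_hidden_state[:, 0, :]
print(features.shape)  # one 768-dim vector per input text

# Approach 2: fine-tuning. A classification head is added on top of BERT and
# all weights are updated on the labeled hate-speech dataset.
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # two labels: hate / not hate
optimizer = torch.optim.AdamW(clf.parameters(), lr=2e-5)
labels = torch.tensor([1, 0])           # toy labels for the two texts
out = clf(**enc, labels=labels)         # forward pass returns cross-entropy loss
out.loss.backward()                     # one illustrative training step
optimizer.step()
print(out.logits.shape)  # one score per class per text
```

In practice, the feature-based route is cheaper (BERT runs once, only the small classifier is trained), while fine-tuning usually yields higher accuracy because the whole encoder adapts to the task.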






How to Cite

J. Aljawazeri and M. N. Jasim, “Addressing Challenges in Hate Speech Detection using BERT-based Models: A Review”, Iraqi Journal For Computer Science and Mathematics, vol. 5, no. 2, pp. 1–20, Mar. 2024.
DOI: 10.52866/ijcsm.2024.05.02.001
Published: 2024-03-15