Digitala Vetenskapliga Arkivet

T5 for Hate Speech, Augmented Data, and Ensemble
Luleå tekniska universitet, EISLAB.ORCID iD: 0000-0002-5582-2031
Luleå tekniska universitet, EISLAB.ORCID iD: 0000-0001-7924-4953
Luleå tekniska universitet, EISLAB.ORCID iD: 0000-0002-5922-7889
Luleå tekniska universitet, EISLAB.ORCID iD: 0000-0002-6756-0147
2023 (English). In: Sci, E-ISSN 2413-4155, Vol. 5, no. 4, article id 37. Article in journal (Refereed). Published.
Abstract [en]

We conduct relatively extensive investigations of automatic hate speech (HS) detection using different State-of-The-Art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage methods, such as data augmentation and ensemble, may have on the best model, if any. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks—macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others—macro F1 scores of 81.66% for subtask A of the OLID 2019 and 82.54% for subtask A of the HASOC 2021, in comparison to SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms (Integrated Gradient (IG) and SHapley Additive exPlanations (SHAP)) to reveal how two of the models (Bi-Directional Long Short-Term Memory Network (Bi-LSTM) and Text-to-Text-Transfer Transformer (T5)) make the predictions they do by using examples. Other contributions of this work are: (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation of the poor data annotations in the HASOC 2021 dataset by using several examples and XAI (buttressing the need for better quality control). We publicly release our model checkpoints and codes to foster transparency.
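The abstract mentions a simple mechanism for correcting Out-of-Class (OoC) predictions in T5. Because T5 is a text-to-text model, its decoder can generate strings outside the task's label set. As a minimal sketch of the general idea (the names, threshold, and fallback rule here are illustrative assumptions, not the paper's exact mechanism), one can map any OoC generation to the most string-similar valid label:

```python
# Hedged sketch: T5 emits free text, so a generated "label" may fall outside
# the task's label set (an Out-of-Class prediction). This illustrative fix
# snaps such outputs to the closest valid label, with a majority-class
# fallback. Label set, threshold, and fallback are assumptions for this demo.
from difflib import SequenceMatcher

VALID_LABELS = ["hateful", "offensive", "normal"]  # example label set
FALLBACK = "normal"                                # e.g. the majority class

def correct_ooc(generated: str,
                valid=VALID_LABELS,
                fallback=FALLBACK,
                threshold=0.5) -> str:
    """Map a raw T5 generation onto the task's label set."""
    text = generated.strip().lower()
    if text in valid:          # already in-class: keep it unchanged
        return text
    # Otherwise pick the most string-similar valid label.
    best, score = max(
        ((lbl, SequenceMatcher(None, text, lbl).ratio()) for lbl in valid),
        key=lambda pair: pair[1],
    )
    # If nothing is close enough, fall back to the default class.
    return best if score >= threshold else fallback
```

For example, a near-miss generation such as `"hatefull"` would be snapped to `"hateful"`, while gibberish with no close match falls back to the default class.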

Place, publisher, year, edition, pages
MDPI, 2023. Vol. 5, no. 4, article id 37
Keywords [en]
hate speech, NLP, T5, LSTM, RoBERTa
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:mau:diva-75691
DOI: 10.3390/sci5040037
Scopus ID: 2-s2.0-85180673806
OAI: oai:DiVA.org:mau-75691
DiVA, id: diva2:1955476
Note

Approved; 2023; Level 0; 2023-11-13 (joosat);

Part of special issue: Computational Linguistics and Artificial Intelligence

CC BY 4.0 License

Available from: 2025-04-30. Created: 2025-04-30. Last updated: 2025-05-07. Bibliographically approved.

Open Access in DiVA

fulltext (2837 kB), 15 downloads
File information
File name: FULLTEXT01.pdf. File size: 2837 kB. Checksum: SHA-512
758bc392cf662ff464477de75f4b9ffe0a1aab8046f0e36f5673362bedf4ff75ee95b279fab421f868fc3518915ffae10ec142e0d027dec694bd189a764c20be
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text
Scopus
Fulltext

Search in DiVA

By author/editor
Adewumi, Oluwatosin; Sabry, Sana Sabah; Abid, Nosheen; Liwicki, Foteini; Liwicki, Marcus
Computer Sciences

Total: 19 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
