Digitala Vetenskapliga Arkivet

1 - 17 of 17
  • 1.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    ML_LTU at SemEval-2022 Task 4: T5 Towards Identifying Patronizing and Condescending Language. 2022. In: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) / [ed] Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan, Association for Computational Linguistics, 2022, p. 473-478. Conference paper (Refereed)
    Abstract [en]

    This paper describes the system used by the Machine Learning Group of LTU in subtask 1 of the SemEval-2022 Task 4: Patronizing and Condescending Language (PCL) Detection. Our system consists of finetuning a pretrained text-to-text transfer transformer (T5) and innovatively reducing its out-of-class predictions. The main contributions of this paper are 1) the description of the implementation details of the T5 model we used, 2) analysis of the successes & struggles of the model in this task, and 3) ablation studies beyond the official submission to ascertain the relative importance of data split. Our model achieves an F1 score of 0.5452 on the official test set.

  • 2.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Habib, Nudrat
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Barney, Elisa
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Instruction Makes a Difference. 2024. In: Document Analysis Systems: 16th IAPR International Workshop, DAS 2024, Athens, Greece, August 30–31, 2024, Proceedings / [ed] Giorgos Sfikas; George Retsinas, Springer Science and Business Media Deutschland GmbH, 2024, p. 71-88. Conference paper (Refereed)
    Abstract [en]

    We introduce the Instruction Document Visual Question Answering (iDocVQA) dataset and the Large Language Document (LLaDoc) model, for training Language-Vision (LV) models for document analysis and predictions on document images, respectively. Usually, deep neural networks for the DocVQA task are trained on datasets lacking instructions. We show that using instruction-following datasets improves performance. We compare performance across document-related datasets using the recent state-of-the-art (SotA) Large Language and Vision Assistant (LLaVA)1.5 as the base model. We also evaluate the performance of the derived models for object hallucination using the Polling-based Object Probing Evaluation (POPE) dataset. The results show that instruction-tuning performance ranges from 11x to 32x of zero-shot performance and from 0.1% to 4.2% over non-instruction (traditional task) finetuning. Despite the gains, these still fall short of human performance (94.36%), implying there’s much room for improvement.

  • 3.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Södergren, Isabella
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sabry, Sana Sabah
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets. 2023. In: Proceedings of Recent Advances in Natural Language Processing / [ed] Galia Angelova, Maria Kunilovskaya and Ruslan Mitkov, Incoma Ltd., 2023, p. 1-10. Conference paper (Refereed)
    Abstract [en]

    We investigate five English NLP benchmark datasets (on the superGLUE leaderboard) and two Swedish datasets for bias, along multiple axes. The datasets are the following: Boolean Question (Boolq), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AXg), Recognising Textual Entailment (RTE), Swedish CB, and SWEDN. Bias can be harmful and it is known to be common in data, which ML models learn from. In order to mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to estimate and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common. Hence, we also contribute a new, large Swedish bias-labeled dataset (of 2 million samples), translated from the English version and train the SotA mT5 model on it. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We make the codes, model, and new dataset publicly available.

  • 4.
    Alkhaled, Lama
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Adewumi, Oluwatosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sabry, Sana Sabah
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bipol: A novel multi-axes bias evaluation metric with explainability for NLP. 2023. In: Natural Language Processing Journal, ISSN 2949-7191, Vol. 4, article id 100030. Article in journal (Refereed)
    Abstract [en]

    We introduce bipol, a new metric with explainability, for estimating social bias in text data. Harmful bias is prevalent in many online sources of data that are used for training machine learning (ML) models. In a step to address this challenge we create a novel metric that involves a two-step process: corpus-level evaluation based on model classification and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to classify bias using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2) and the WinoBias dataset. As additional contribution, we created a large English dataset (with almost 2 million labeled samples) for training models in bias classification and make it publicly available. We also make public our codes.

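The two-step process described in the bipol abstract above (corpus-level evaluation via a trained bias classifier, then sentence-level evaluation via sensitive-term frequency) can be sketched roughly as follows. This is a simplified illustration only: the stub classifier, the toy lexicon, and the way the two scores are combined are placeholder assumptions, not the authors' bipol implementation.

```python
# Hypothetical sketch of a bipol-style two-step bias score.
# The lexicon and the stub classifier are illustrative stand-ins.

LEXICA = {  # toy sensitive-term lexicon: one (group_a, group_b) pair per axis
    "gender": ({"he", "him", "his"}, {"she", "her", "hers"}),
}

def stub_classifier(sentence):
    """Placeholder for the trained corpus-level bias classifier."""
    tokens = set(sentence.lower().split())
    return any(tokens & pos or tokens & neg for pos, neg in LEXICA.values())

def bipol_sketch(sentences):
    # Step 1: corpus-level score = fraction of sentences flagged as biased.
    flagged = [s for s in sentences if stub_classifier(s)]
    corpus_score = len(flagged) / len(sentences) if sentences else 0.0
    # Step 2: sentence-level score = average sensitive-term imbalance
    # across flagged sentences and axes.
    imbalances = []
    for s in flagged:
        tokens = s.lower().split()
        for pos, neg in LEXICA.values():
            p = sum(t in pos for t in tokens)
            n = sum(t in neg for t in tokens)
            if p + n:
                imbalances.append(abs(p - n) / (p + n))
    sentence_score = sum(imbalances) / len(imbalances) if imbalances else 0.0
    return corpus_score * sentence_score
```

In the real metric, step one uses classifiers trained on a large bias-labeled dataset and step two uses curated multi-axes lexica; the sketch keeps only the overall shape of the computation.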
  • 5.
    Alkhaled, Lama
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Fei, Ng Yee
    Asia Pacific University, Faculty of Computing, Kuala Lumpur, Malaysia.
    Automated Invoice Processing System. 2023. In: 2023 IEEE International Conference on Industrial Engineering and Engineering Management, IEEM 2023, IEEE, 2023, p. 0188-0192. Conference paper (Refereed)
  • 6.
    Alkhaled, Lama
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Khamis, Taha
    Faculty of Engineering, University of Malaya, Malaysia.
    Supportive Environment for Better Data Management Stage in the Cycle of ML Process. 2024. In: Artificial Intelligence and Applications, E-ISSN 2811-0854, Vol. 2, no 2, p. 121-128. Article in journal (Refereed)
    Abstract [en]

    The objective of this study is to explore the process of developing artificial intelligence and machine learning (ML) applications to establish an optimal support environment. The primary stages of ML include problem understanding, data management (DM), model building, model deployment, and maintenance. This paper specifically focuses on examining the DM stage of ML development and the challenges it presents, as it is crucial for achieving accurate end models. During this stage, the major obstacle encountered was the scarcity of adequate data for model training, particularly in domains where data confidentiality is a concern. The work aimed to construct and enhance a framework that would assist researchers and developers in addressing the insufficiency of data during the DM stage. The framework incorporates various data augmentation techniques, enabling the generation of new data from the original dataset along with all the required files for detection challenges. This augmentation process improves the overall performance of ML applications by increasing both the quantity and quality of available data, thereby providing the model with the best possible input.

  • 7.
    Alkhaled, Lama
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Roy, Ayush
    Department of Electrical Engineering, Jadavpur University, India.
    Palaiahnakote, Shivakumara
    Faculty of Computer Science and Information Technology, University of Malaya, Malaysia.
    An Attention-Based Fusion of ResNet50 and InceptionV3 Model for Water Meter Digit Recognition. 2023. In: Artificial Intelligence and Applications, E-ISSN 2811-0854. Article in journal (Refereed)
    Abstract [en]

    Digital water meter digit recognition from images of water meter readings is a challenging research problem. One key reason is the lack of publicly available datasets for developing such methods; another is that the digits suffer from poor image quality. In this work, we develop a dataset, called MR-AMR-v1, which comprises the 10 digits (0–9) commonly found in electrical and electronic water meter readings. Additionally, we generate a synthetic benchmarking dataset to make the proposed model robust. We propose a weighted probability averaging ensemble-based water meter digit recognition method applied to snapshots of the Fourier-transformed, convolution block attention module-aided combined ResNet50-InceptionV3 architecture. This method achieves an accuracy of 88% on test set images (benchmarking data). Our model also achieves a high accuracy of 97.73% on the MNIST dataset. We benchmark the result on this dataset using the proposed method after performing an exhaustive set of experiments.

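The weighted probability averaging described in the abstract above can be illustrated in miniature: blend the class distributions of two branches and take the argmax. The weights and probability vectors below are illustrative assumptions, not the fitted values from the paper's ResNet50/InceptionV3 ensemble.

```python
# Hypothetical sketch of weighted probability averaging over two 10-way
# digit classifiers; weights and distributions are made up for illustration.

def ensemble(p_a, p_b, w_a=0.6, w_b=0.4):
    """Blend two class-probability vectors and return (argmax digit, blend)."""
    blended = [w_a * a + w_b * b for a, b in zip(p_a, p_b)]
    return max(range(len(blended)), key=blended.__getitem__), blended

p_resnet = [0.05] * 10
p_resnet[7] = 0.55                     # branch A is fairly confident: digit 7
p_incep = [0.05] * 10
p_incep[7], p_incep[1] = 0.35, 0.25    # branch B hesitates between 7 and 1

digit, blended = ensemble(p_resnet, p_incep)
```

With convex weights, the blend of two valid distributions remains a valid distribution, so the argmax is still a well-defined class decision.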
  • 8.
    Granado, Felipe Macías
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    How GNNs Can Be Used in the Vehicle Industry. 2024. In: Artificial Intelligence and Applications, E-ISSN 2811-0854. Article in journal (Refereed)
    Abstract [en]

    Graph Neural Networks (GNNs) have garnered substantial interest across different fields, including the automotive sector, owing to their adeptness in comprehending and managing data characterized by intricate connections and arrangements. Within the automotive realm, GNNs can be harnessed in diverse capacities to elevate effectiveness, safety, and overall operational excellence. This study is centered on the assessment of various Graph Neural Network (GNN) models and their potential performance within the automotive sector, utilizing widely recognized datasets. The objective of the study was to raise awareness among researchers and developers working on vehicle intelligence systems (VIS) about the potential benefits of utilizing Graph Neural Networks (GNNs). This could offer solutions to various challenges in this field, including comprehending complex scenes, managing diverse data from multiple sources, adapting to dynamic situations, and more. The research explores three distinct GNN models named ViG, Point-GNN, and Few-shot GNN. These models were evaluated using datasets such as KITTI, Mini Imagenet, and ILSVRC.

  • 9.
    Mokayed, Hamam
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Clark, Thomas
    Department of Computer Engineering, Asia Pacific University, Kuala Lumpur, Malaysia.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Marashli, Mohamad Ali
    Department of Physics, City University of Hong Kong, Hong Kong.
    Chai, Hum Yan
    Department of Mechatronics and Biomedical Engineering, Universiti Tunku Abdul Rahman, Selangor, Malaysia.
    On Restricted Computational Systems, Real-time Multi-tracking and Object Recognition Tasks are Possible. 2022. In: 2022 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), IEEE, 2022, p. 1523-1528. Conference paper (Refereed)
    Abstract [en]

    Intelligent surveillance systems are inherently computationally intensive, and with their ever-expanding use in both small-scale home security applications and on the national scale, efficient computer vision processing is critical. To this end, we propose a framework that exploits modern hardware by incorporating multi-threading and concurrency to facilitate the complex processes associated with object detection, tracking, and identification, enabling lower-powered systems to support such intelligent surveillance effectively. The proposed architecture provides an adaptable and robust processing pipeline, leveraging the thread pool design pattern. The developed method can achieve respectable throughput rates on low-powered or otherwise constrained compute platforms.
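    The thread pool design pattern named in the abstract above can be sketched with Python's standard `concurrent.futures`: a fixed pool of workers keeps several frames in flight at once. The detect/track/identify functions here are stand-in stubs, not the authors' models or pipeline.

    ```python
    # Minimal sketch of a thread-pool frame pipeline in the spirit of the
    # described architecture; the three stages are placeholder stubs.
    from concurrent.futures import ThreadPoolExecutor

    def detect(frame):     # placeholder object detector
        return list(frame.get("objects", []))

    def track(objects):    # placeholder tracker: tag each object with an id
        return [{"id": i, "obj": o} for i, o in enumerate(objects)]

    def identify(tracks):  # placeholder identifier: label every track
        return [dict(t, label="person") for t in tracks]

    def process_frame(frame):
        """One unit of work handed to the pool: detect -> track -> identify."""
        return identify(track(detect(frame)))

    def run_pipeline(frames, workers=4):
        # A fixed number of worker threads processes frames concurrently,
        # which suits low-powered devices with a few spare cores.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(process_frame, frames))
    ```

    `pool.map` preserves input order, so results line up with the frame sequence even though frames complete out of order.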

  • 10.
    Mokayed, Hamam
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Nayebiastaneh, Amirhossein
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sozos, Stergios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Hagner, Olle
    Smartplanes, Jävre, Sweden.
    Backe, Björn
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Challenging YOLO and Faster RCNN in Snowy Conditions: UAV Nordic Vehicle Dataset (NVD) as an Example. 2024. In: 2nd International Conference on Unmanned Vehicle Systems / [ed] Aliya Al-Hashim; Tasneem Pervez; Lazhar Khriji; Muhammad Bilal Waris, IEEE, 2024. Conference paper (Refereed)
  • 11.
    Mokayed, Hamam
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Palaiahnakote, Shivakumara
    Department of System and Technology, Faculty of Computer Science and Information Technology, University Malaya, Malaysia.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    AL-Masri, Ahmed N.
    Studies, Research and Development, Ministry of Energy and Infrastructure, UAE.
    License Plate Number Detection in Drone Images. 2022. In: Artificial Intelligence and Applications, E-ISSN 2811-0854. Article in journal (Refereed)
    Abstract [en]

    For an intelligent transportation system, identifying license plate numbers in drone photos is difficult, yet it is needed in practical applications like parking management, traffic management, automatically organizing parking spots, etc. The primary goal of the presented work is to demonstrate how to extract robust and invariant features from PCM that can withstand the difficulties posed by drone images. The work then takes advantage of a fully connected neural network to tackle the difficulty of fixing precise bounding boxes regardless of orientation, shape, and text size. The proposed method detects text in both license plate images and natural scene images, which leads to a better recognition stage. Both our drone dataset (Mimos) and the benchmark license plate dataset (Medialab) are used to assess the effectiveness of the study. To show that the suggested system can detect natural scene text in a wide variety of situations, four benchmark datasets, namely SVT, MSRA-TD-500, ICDAR 2017 MLT, and Total Text, are used for the experimental results. We also describe trials that demonstrate robustness to varying height distances and angles. This work's code and data will be made publicly available on GitHub.

  • 12.
    Mokayed, Hamam
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Quan, Tee Zhen
    Faculty of Computing, Asia Pacific University, Malaysia .
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sivakumar, V.
    Faculty of Computing, Asia Pacific University, Malaysia .
    Real-Time Human Detection and Counting System Using Deep Learning Computer Vision Techniques. 2023. In: Artificial Intelligence and Applications, E-ISSN 2811-0854, Vol. 1, no 4, p. 221-229. Article in journal (Refereed)
    Abstract [en]

    Targeting the Covid-19 pandemic situation, this paper identifies the need for crowd management. It proposes an effective and efficient real-time human detection and counting solution, specifically for shopping malls, by producing a system with a graphical user interface and management functionalities. It also comprehensively reviews and compares existing techniques and similar systems to select the ideal solution for this scenario. Specifically, advanced deep learning computer vision techniques are chosen: YOLOv3 for detecting and classifying human objects, and the DeepSORT tracking algorithm to track each detected human object and perform counting using intrusion line judgment. Additionally, the pretrained YOLOv3 is converted into TensorFlow format for better and faster real-time computation on a graphical processing unit, instead of using the central processing unit as the traditional target machine. The experimental results show this implementation combination to be 91.07% accurate and real-time capable on testing videos from the internet that simulate the shopping mall entrance scenario.

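The intrusion line judgment mentioned in the abstract above amounts to checking when a tracked centroid moves across a fixed counting line. In the paper, detections come from YOLOv3 and track ids from DeepSORT; in this rough sketch, the per-frame (track id, y-centroid) positions are synthetic stand-ins.

```python
# Hypothetical sketch of intrusion-line counting over tracked centroids.
# Frames are dicts mapping track_id -> y-centroid (pixels); real systems
# would fill these from a detector + tracker such as YOLOv3 + DeepSORT.

LINE_Y = 100  # horizontal counting line, in pixels

def count_crossings(frames):
    """Count tracks whose centroid crosses LINE_Y downward (an 'entry')."""
    last_y, entries = {}, 0
    for frame in frames:
        for tid, y in frame.items():
            prev = last_y.get(tid)
            if prev is not None and prev < LINE_Y <= y:
                entries += 1  # track crossed the line top -> bottom
            last_y[tid] = y
    return entries
```

Keying the check on track id rather than raw detections is what prevents the same person from being counted once per frame while standing on the line.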
  • 13.
    Mokayed, Hamam
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Ulehla, Christián
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Shurdhaj, Elda
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Nayebiastaneh, Amirhossein
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Hagner, Olle
    Smartplanes, Jävre, 94494 Piteå Municipality, Sweden.
    Hum, Yan Chai
    Mechatronics and Biomedical Engineering, Universiti Tunku Abdul Rahman, Jalan Sungai Long, Bandar Sungai Long, Kajang, Selangor, 43000, Malaysia.
    Fractional B-Spline Wavelets and U-Net Architecture for Robust and Reliable Vehicle Detection in Snowy Conditions. 2024. In: Sensors, E-ISSN 1424-8220, Vol. 24, no 12, article id 3938. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the critical need for advanced real-time vehicle detection methodologies in Vehicle Intelligence Systems (VIS), especially in the context of using Unmanned Aerial Vehicles (UAVs) for data acquisition in severe weather conditions, such as heavy snowfall typical of the Nordic region. Traditional vehicle detection techniques, which often rely on custom-engineered features and deterministic algorithms, fall short in adapting to diverse environmental challenges, leading to a demand for more precise and sophisticated methods. The limitations of current architectures, particularly when deployed in real-time on edge devices with restricted computational capabilities, are highlighted as significant hurdles in the development of efficient vehicle detection systems. To bridge this gap, our research focuses on the formulation of an innovative approach that combines the fractional B-spline wavelet transform with a tailored U-Net architecture, operational on a Raspberry Pi 4. This method aims to enhance vehicle detection and localization by leveraging the unique attributes of the NVD dataset, which comprises drone-captured imagery under the harsh winter conditions of northern Sweden. The dataset, featuring 8450 annotated frames with 26,313 vehicles, serves as the foundation for evaluating the proposed technique. The comparative analysis of the proposed method against state-of-the-art detectors, such as YOLO and Faster RCNN, in both accuracy and efficiency on constrained devices, emphasizes the capability of our method to balance the trade-off between speed and accuracy, thereby broadening its utility across various domains.

  • 14.
    Mudhalwadkar, Nikhil Prashant
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Shivakumara, Palaiahnakote
    Centre of Image and Signal Processing, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia.
    Hum, Yan Chai
    Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Petaling Jaya, Malaysia.
    Anime Sketch Colourization Using Enhanced Pix2pix GAN. 2023. In: Pattern Recognition: 7th Asian Conference, ACPR 2023, Kitakyushu, Japan, November 5–8, 2023, Proceedings Part I / [ed] Huimin Lu; Michael Blumenstein; Sung-Bae Cho; Cheng-Lin Liu; Yasushi Yagi; Tohru Kamiya, Springer Nature, 2023, Vol. 1, p. 148-164. Conference paper (Refereed)
  • 15.
    Pagliai, Irene
    et al.
    University of Göttingen, Germany.
    van Boven, Goya
    Utrecht University, the Netherlands.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Gurung, Namrata
    QualityMinds GmbH, Germany.
    Södergren, Isabella
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Barney, Elisa
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead. 2024. In: Proceedings of the 7th International Conference on Natural Language and Speech Processing (ICNLSP-2024) / [ed] Mourad Abbas; Abed Alhakim Freihat, Association for Computational Linguistics, 2024, p. 34-46, article id 2024.icnlsp-1.5. Conference paper (Refereed)
  • 16.
    Saleh, Yahya Sherif Solayman Mohamed
    et al.
    Faculty of Computing, Engineering and Technology, Asia Pacific University, Kuala Lumpur, Malaysia.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Nikolaidou, Konstantina
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Hum, Yan Chai
    Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long, Malaysia.
    How GANs assist in Covid-19 pandemic era: a review. 2024. In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 83, no 10, p. 29915-29944. Article, review/survey (Refereed)
  • 17.
    Wang, Jiayi
    et al.
    University College London, UK.
    Adelani, David Ifeoluwa
    University College London, UK; Masakhane NLP.
    Agrawal, Sweta
    University of Maryland, USA.
    Masiak, Marek
    University College London, UK.
    Rei, Ricardo
    Unbabel; Instituto Superior Técnico; INESC-ID.
    Briakou, Eleftheria
    University of Maryland, USA.
    Carpuat, Marine
    University of Maryland, USA.
    He, Xuanli
    University College London, UK.
    Bourhim, Sofia
    ENSIAS, Morocco.
    Bukula, Andiswa
    SADiLaR, South Africa.
    Mohamed, Muhidin
    Aston University, UK.
    Olatoye, Temitayo
    University of Eastern Finland, Finland.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mwase, Christine
    Fudan University, China.
    Kimotho, Wangui
    Masakhane NLP.
    Yuehgoh, Foutse
    Conservatoire National des Arts et Métiers, France.
    Aremu, Anuoluwapo
    Masakhane NLP.
    Ojo, Jessica
    Masakhane NLP; Lelapa AI, South Africa.
    Muhammad, Shamsuddeen Hassan
    Masakhane NLP; Imperial College London, UK; HausaNLP.
    Osei, Salomey
    Masakhane NLP; University of Deusto, Spain.
    Omotayo, Abdul-Hakeem
    Masakhane NLP; University of California, USA.
    Chukwuneke, Chiamaka
    Masakhane NLP; Lancaster University, UK.
    Ogayo, Perez
    Masakhane NLP.
    Hourrane, Oumaima
    Masakhane NLP.
    Anigri, Salma El
    Mohammed V University, Morocco.
    Ndolela, Lolwethu
    Masakhane NLP.
    Mangwana, Thabiso
    Masakhane NLP.
    Mohamed, Shafie Abdi
    Jamhuriya University Of Science and Technology, Somalia.
    Hassan, Ayinde
    LAUTECH, Nigeria.
    Awoyomi, Oluwabusayo Olufunke
    The College of Saint Rose, USA.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Al-Azzawi, Sana
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Etori, Naome A.
    University of Minnesota -Twin Cities, USA.
    Ochieng, Millicent
    Microsoft Africa Research Institute.
    Siro, Clemencia
    University of Amsterdam, Netherlands.
    Njoroge, Samuel
    The Technical University of Kenya.
    Muchiri, Eric
    Masakhane NLP.
    Kimotho, Wangari
    AIMS, Cameroon.
    Momo, Lyse Naomi Wamba
    KU Leuven, Belgium.
    Abolade, Daud
    Masakhane NLP.
    Ajao, Simbiat
    Masakhane NLP.
    Shode, Iyanuoluwa
    Masakhane NLP.
    Macharm, Ricky
    Masakhane NLP.
    Iro, Ruqayya Nasir
    HausaNLP.
    Abdullahi, Saheed S.
    SIAT-CAS, China; Kaduna State University, Nigeria.
    Moore, Stephen E.
    University of Cape Coast, Ghana; Ghana NLP.
    Opoku, Bernard
    Masakhane NLP; Kwame Nkrumah University of Science and Technology, Ghana.
    Akinjobi, Zainab
    Masakhane NLP; New Mexico State University, USA.
    Afolabi, Abeeb
    Masakhane NLP.
    Obiefuna, Nnaemeka
    Masakhane NLP.
    Ogbu, Onyekachi Raphael
    Masakhane NLP.
    Brian, Sam
    Masakhane NLP.
    Otiende, Verrah Akinyi
    USIU-Africa.
    Mbonu, Chinedu Emmanuel
    UNIZIK, Nigeria.
    Sari, Sakayo Toadoum
    AIMS, Senegal.
    Lu, Yao
    University College London, UK.
    Stenetorp, Pontus
    University College London, UK.
    AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages. 2024. In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 / [ed] Duh K.; Gomez H.; Bethard S., Association for Computational Linguistics (ACL), 2024, p. 5997-6023, article id 200463. Conference paper (Refereed)
    Abstract [en]

    Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AFRICOMET: COMET evaluation metrics for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create the state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
