Digitala Vetenskapliga Arkivet

Hits 1-50 of 2581
  • 1. AAl Abdulsalam, Abdulrahman
    et al.
    Velupillai, Sumithra
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Teoretisk datalogi, TCS. King's College, London.
    Meystre, Stephane
    UtahBMI at SemEval-2016 Task 12: Extracting Temporal Information from Clinical Text (2016). In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), Association for Computational Linguistics, 2016, pp. 1256-1262. Conference paper (Refereed)
    Abstract [en]

    The 2016 Clinical TempEval continued the 2015 shared task on temporal information extraction with a new evaluation test set. Our team, UtahBMI, participated in all subtasks using machine learning approaches with ClearTK (LIBLINEAR), CRF++ and CRFsuite packages. Our experiments show that CRF-based classifiers yield, in general, higher recall for multi-word spans, while SVM-based classifiers are better at predicting correct attributes of TIMEX3. In addition, we show that an ensemble-based approach for TIMEX3 could yield improved results. Our team achieved competitive results in each subtask with an F1 75.4% for TIMEX3, F1 89.2% for EVENT, F1 84.4% for event relations with document time (DocTimeRel), and F1 51.1% for narrative container (CONTAINS) relations.
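    The ensemble idea mentioned above can be illustrated with a token-level majority vote over BIO labels. A minimal sketch, assuming each base classifier (CRF- or SVM-based) has already produced one label per token; names and data are illustrative, not the authors' code:

        from collections import Counter

        def ensemble_vote(predictions):
            """predictions: list of label sequences, one per base classifier."""
            ensembled = []
            for labels in zip(*predictions):  # one tuple of labels per token
                label, count = Counter(labels).most_common(1)[0]
                # fall back to the first classifier's label on ties
                ensembled.append(label if count > 1 else labels[0])
            return ensembled

        crf_out  = ["B-TIMEX3", "I-TIMEX3", "O", "O"]
        svm_out  = ["B-TIMEX3", "O",        "O", "B-TIMEX3"]
        crf2_out = ["B-TIMEX3", "I-TIMEX3", "O", "B-TIMEX3"]
        print(ensemble_vote([crf_out, svm_out, crf2_out]))
        # ['B-TIMEX3', 'I-TIMEX3', 'O', 'B-TIMEX3']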

  • 2.
    Abdou, Mostafa
    et al.
    Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark.
    Ravishankar, Vinit
    Univ Oslo, Dept Informat, Language Technol Grp, Oslo, Norway.
    Kulmizev, Artur
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Sogaard, Anders
    Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark.
    Word Order Does Matter (And Shuffled Language Models Know It) (2022). In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, Association for Computational Linguistics, 2022, pp. 6907-6919. Conference paper (Refereed)
    Abstract [en]

    Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information. Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucial for models' good performance with shuffled text. We probe these language models for word order information and investigate what position embeddings learned from shuffled text encode, showing that these models retain information pertaining to the original, naturalistic word order. We show this is in part due to a subtlety in how shuffling is implemented in previous work: before rather than after subword segmentation. Surprisingly, we find that even language models trained on text shuffled after subword segmentation retain some semblance of information about word order because of the statistical dependencies between sentence length and unigram probabilities. Finally, we show that beyond GLUE, a variety of language understanding tasks do require word order information, often to an extent that cannot be learned through fine-tuning.
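    The shuffling subtlety described above can be made concrete with a toy example. A sketch, using a stand-in segmenter rather than real BPE: shuffling words before segmentation keeps each word's subword pieces adjacent and in order, while shuffling the subword sequence afterwards destroys that signal.

        import random

        def segment(word):
            # pretend-BPE: split words longer than four characters
            return [word[:4] + "@@", word[4:]] if len(word) > 4 else [word]

        sentence = "language models know word order".split()
        random.seed(0)

        # shuffle at the word level, then segment: pieces stay contiguous
        before = [p for w in random.sample(sentence, len(sentence)) for p in segment(w)]

        # segment first, then shuffle the pieces: local order is gone
        after = [p for w in sentence for p in segment(w)]
        random.shuffle(after)

        print(before)
        print(after)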

  • 3.
    Abdulmumin, Idris
    et al.
    Ahmadu Bello University, Zaria, Nigeria; HausaNLP.
    Beukman, Michael
    University of the Witwatersrand, South Africa.
    Alabi, Jesujoba O.
    Saarland University, Germany.
    Emezue, Chris
    TUM, Germany; Mila - Quebec AI Institute.
    Asiko, Everlyn
    University of Cape Town, South Africa; African Institute for Mathematical Sciences.
    Adewumi, Oluwatosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Muhammad, Shamsuddeen Hassan
    HausaNLP; LIAAD-INESC TEC, Porto, Portugal.
    Adeyemi, Mofetoluwa
    Uppsala University, Sweden.
    Yousuf, Oreen
    Uppsala University, Sweden.
    Singh, Sahib
    Ford Motor Company.
    Gwadabe, Tajuddeen Rabiu
    HausaNLP; University of Chinese Academy of Sciences, China.
    Separating Grains from the Chaff: Using Data Filtering to Improve Multilingual Translation for Low-Resourced African Languages (2022). In: Proceedings of the Seventh Conference on Machine Translation (WMT) / [ed] Philipp Koehn, Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Tom Kocmi, André Martins, Makoto Morishita, Christof Monz, Masaaki Nagata, Toshiaki Nakazawa, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Marco Turchi, Marcos Zampieri, Association for Computational Linguistics, 2022, pp. 1001-1014. Conference paper (Refereed)
    Abstract [en]

    We participated in the WMT 2022 Large-Scale Machine Translation Evaluation for the African Languages Shared Task. This work describes our approach, which is based on filtering the given noisy data using a sentence-pair classifier that was built by fine-tuning a pre-trained language model. To train the classifier, we obtain positive samples (i.e. high-quality parallel sentences) from a gold-standard curated dataset and extract negative samples (i.e. low-quality parallel sentences) from automatically aligned parallel data by choosing sentences with low alignment scores. Our final machine translation model was then trained on the filtered data, instead of the entire noisy dataset. We empirically validate our approach by evaluating on two common datasets and show that data filtering generally improves overall translation quality, in some cases even significantly.
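    The filtering step lends itself to a compact sketch. Here quality_score is a stub standing in for the fine-tuned sentence-pair classifier described above; only pairs scoring above a threshold are kept for MT training.

        def quality_score(src, tgt):
            # stand-in for the classifier's probability that (src, tgt)
            # is a high-quality translation pair
            return 0.9 if len(src.split()) == len(tgt.split()) else 0.3

        def filter_corpus(pairs, threshold=0.5):
            return [(s, t) for s, t in pairs if quality_score(s, t) >= threshold]

        noisy = [("good morning", "bonjour ami"), ("stray boilerplate", "zzz")]
        print(filter_corpus(noisy))  # keeps only the plausible pair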

  • 4.
    Abou Zliekha, M.
    et al.
    Damascus University/Faculty of Information Technology.
    Al Moubayed, Samer
    Damascus University/Faculty of Information Technology.
    Al Dakkak, O.
    Higher Institute of Applied Science and Technology (HIAST).
    Ghneim, N.
    Higher Institute of Applied Science and Technology (HIAST).
    Emotional Audio-Visual Arabic Text to Speech (2006). In: Proceedings of the XIV European Signal Processing Conference (EUSIPCO), Florence, Italy, 2006. Conference paper (Refereed)
    Abstract [en]

    The goal of this paper is to present an emotional audio-visual text-to-speech system for the Arabic language. The system is based on two entities: an emotional audio text-to-speech system, which generates speech depending on the input text and the desired emotion type, and an emotional visual model, which generates the talking head by forming the corresponding visemes. The phoneme-to-viseme mapping and the emotion shaping use a 3-parametric face model based on the Abstract Muscle Model. We have thirteen viseme models and five emotions as parameters to the face model. The TTS produces the phonemes corresponding to the input text and the speech with the suitable prosody to include the prescribed emotion. In parallel, the system generates the visemes and sends the controls to the facial model to get the animation of the talking head in real time.

  • 5. Abrahamsson, M.
    et al.
    Sundberg, Johan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Subglottal pressure variation in actors’ stage speech (2007). In: Voice and Gender: Journal for the Voice and Speech Trainers Association / [ed] Rees, M., VASTA Publishing, 2007, pp. 343-347. Book chapter, part of an anthology (Refereed)
  • 6.
    Abrahamsson, Peder
    Linköpings universitet, Institutionen för datavetenskap.
    Mer lättläst: Påbyggnad av ett automatiskt omskrivningsverktyg till lätt svenska [Easier to read: Extending an automatic rewriting tool for easy Swedish] (2011). Independent thesis, basic level (Bachelor's degree), 12 credits / 18 HE credits. Student thesis (Degree project)
    Abstract [sv]

    The Swedish language should be available to everyone who lives and works in Sweden. It is therefore important that easy-to-read alternatives exist for those who find it difficult to read Swedish text. This work builds further on showing that it is possible to create an automatic rewriting program that makes texts easier to read. The work is based on CogFLUX, a tool for automatic rewriting into easy Swedish. CogFLUX contains functions for syntactically rewriting texts into more easily read Swedish. The rewrites are made using rewriting rules developed in an earlier project. In this work, additional rewriting rules are implemented, as well as a new module for handling synonyms. With these new rules and the module, the work investigates whether it is possible to create a system that yields a more readable text according to established readability measures such as LIX, OVIX and nominal ratio. The rewriting rules and the synonym handler are tested on three different texts with a total length of roughly one hundred thousand words. The work shows that both the LIX value and the nominal ratio can be lowered significantly with the help of the rewriting rules and the synonym handler. It also shows that several things remain to be done to produce a really good program for automatic rewriting into easy Swedish.
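    The readability measures named above are easy to state precisely. A sketch of LIX (average sentence length plus the percentage of words longer than six characters) and the standard OVIX word-variation formula, on naively tokenized text; this is not the thesis code:

        import math

        def lix(text):
            sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
            words = text.split()
            long_words = [w for w in words if len(w.strip(".,!?")) > 6]
            return len(words) / len(sentences) + 100 * len(long_words) / len(words)

        def ovix(words):
            # undefined when every word is unique (division by zero)
            n, v = len(words), len(set(words))
            return math.log(n) / math.log(2 - math.log(v) / math.log(n))

        sample = "Det svenska språket ska finnas tillgängligt för alla. Lättlästa alternativ behövs."
        print(round(lix(sample), 1))  # about 51: dense, fairly hard text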

    Download full text (pdf)
    fulltext
  • 7.
    Adams, Allison
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Dependency Parsing and Dialogue Systems: an investigation of dependency parsing for commercial application (2017). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    In this thesis, we investigate dependency parsing for commercial application, namely for future integration in a dialogue system. To do this, we conduct several experiments on dialogue data to assess parser performance on this domain, and to improve this performance over a baseline. This work makes the following contributions: first, the creation and manual annotation of a gold-standard data set for dialogue data; second, a thorough error analysis of the data set, comparing neural network parsing to traditional parsing methods on this domain; and finally, various domain adaptation experiments showing how parsing on this data set can be improved over a baseline. We further show that dialogue data is characterized by questions in particular, and suggest a method for improving overall parsing on these constructions.

    Download full text (pdf)
    fulltext
  • 8.
    Adams, Allison
    et al.
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Stymne, Sara
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Learning with learner corpora: Using the TLE for native language identification (2017). In: Proceedings of the joint workshop on NLP for Computer Assisted Language Learning and NLP for Language Acquisition, 2017, pp. 1-7. Conference paper (Refereed)
    Abstract [en]

    This study investigates the usefulness of the Treebank of Learner English (TLE) when applied to the task of Native Language Identification (NLI). The TLE is effectively a parallel corpus of Standard/Learner English, as there are two versions; one based on original learner essays, and the other an error-corrected version. We use the corpus to explore how useful a parser trained on ungrammatical relations is compared to a parser trained on grammatical relations, when used as features for a native language classification task. While parsing results are much better when trained on grammatical relations, native language classification is slightly better using a parser trained on the original treebank containing ungrammatical relations.

    Download full text (pdf)
    fulltext
  • 9.
    Adelani, David
    et al.
    Saarland Univ, Saarbrucken, Germany.
    Alabi, Jesujoba
    INRIA, Paris, France.
    Fan, Angela
    Meta AI, Menlo Pk, CA USA.
    Kreutzer, Julia
    Google Res, Mountain View, CA USA.
    Shen, Xiaoyu
    Amazon Alexa AI, Seattle, WA USA.
    Reid, Machel
    Univ Tokyo, Tokyo, Japan.
    Ruiter, Dana
    Saarland Univ, Saarbrucken, Germany.
    Klakow, Dietrich
    Saarland Univ, Saarbrucken, Germany.
    Nabende, Peter
    Makerere Univ, Kampala, Uganda.
    Chang, Ernie
    Saarland Univ, Saarbrucken, Germany.
    Gwadabe, Tajuddeen
    UCAS, Beijing, Peoples R China.
    Sackey, Freshia
    JKUAT, Juja, Kenya.
    Dossou, Bonaventure F. P.
    Jacobs Univ, Bremen, Germany.
    Emezue, Chris
    TUM, Munich, Germany.
    Leong, Colin
    Univ Dayton, Dayton, OH 45469 USA.
    Beukman, Michael
    Univ Witwatersrand, Johannesburg, South Africa.
    Muhammad, Shamsuddeen
    LIAAD INESC TEC, Porto, Portugal.
    Jarso, Guyo
    Yousuf, Oreen
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Rubungo, Andre Niyongabo
    UPC, Barcelona, Spain.
    Hacheme, Gilles
    Ai4Innov, Paris, France.
    Wairagala, Eric Peter
    Makerere Univ, Kampala, Uganda.
    Nasir, Muhammad Umair
    Ominor AI, Orlando, FL USA.
    Ajibade, Benjamin
    Ajayi, Tunde
    Gitau, Yvonne
    Abbott, Jade
    Ahmed, Mohamed
    Microsoft Africa Res Inst, Nairobi, Kenya.
    Ochieng, Millicent
    Microsoft Africa Res Inst, Nairobi, Kenya.
    Aremu, Anuoluwapo
    Ogayo, Perez
    CMU, Pittsburgh, PA USA.
    Mukiibi, Jonathan
    Makerere Univ, Kampala, Uganda.
    Kabore, Fatoumata Ouoba
    Kalipe, Godson
    Mbaye, Derguene
    Baamtu, Dakar, Senegal.
    Tapo, Allahsera Auguste
    RIT, Rochester, NY USA.
    Koagne, Victoire Memdjokam
    Munkoh-Buabeng, Edwin
    Wagner, Valencia
    SPU, Kimberley, South Africa.
    Abdulmumin, Idris
    ABU, Abuja, Nigeria.
    Awokoya, Ayodele
    UI Ibadan, Ibadan, Nigeria.
    Buzaaba, Happy
    Sibanda, Blessing
    NUST, Windhoek, Namibia.
    Bukula, Andiswa
    SADiLaR, Potchefstroom, South Africa.
    Manthalu, Sam
    Univ Malawi, Zomba, Malawi.
    A Few Thousand Translations Go A Long Way! Leveraging Pre-trained Models for African News Translation (2022). In: NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Stroudsburg: Association for Computational Linguistics, 2022, pp. 3053-3070. Conference paper (Refereed)
    Abstract [en]

    Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls used to create datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pre-training? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a new African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both to additional languages and to additional domains is to fine-tune large pre-trained models on small quantities of high-quality translation data.
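    The winning strategy, fine-tuning a large pre-trained model on a small amount of high-quality parallel data, can be sketched with Hugging Face transformers. The checkpoint name and the single training pair are placeholders, and multilingual models additionally need source/target language codes configured; see the paper for the actual setup:

        import torch
        from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

        name = "facebook/m2m100_418M"  # assumed example of a multilingual MT model
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSeq2SeqLM.from_pretrained(name)
        optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

        pairs = [("Good morning", "Barka da safiya")]  # tiny high-quality set
        model.train()
        for src, tgt in pairs:
            batch = tokenizer(src, return_tensors="pt")
            labels = tokenizer(text_target=tgt, return_tensors="pt").input_ids
            loss = model(**batch, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()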

  • 10.
    Adelani, David Ifeoluwa
    et al.
    Spoken Language Systems Group (LSV), Saarland University, Germany; Masakhane NLP.
    Abbott, Jade
    Retro Rabbit, South Africa; Masakhane NLP.
    Neubig, Graham
    Language Technologies Institute, Carnegie Mellon University, United States.
    D'souza, Daniel
    ProQuest, United States; Masakhane NLP.
    Kreutzer, Julia
    Google Research, Canada; Masakhane NLP.
    Lignos, Constantine
    Brandeis University, United States; Masakhane NLP.
    Palen-Michel, Chester
    Brandeis University, United States; Masakhane NLP.
    Buzaaba, Happy
    Graduate School of Systems and Information Engineering, University of Tsukuba, Japan; Masakhane NLP.
    Rijhwani, Shruti
    Language Technologies Institute, Carnegie Mellon University, United States.
    Ruder, Sebastian
    DeepMind, United Kingdom.
    Mayhew, Stephen
    Duolingo, United States.
    Abebe Azime, Israel
    African Institute for Mathematical Sciences (AIMS-AMMI), Ethiopia; Masakhane NLP.
    Muhammad, Shamsuddeen H.
    University of Porto, Nigeria; Bayero University, Kano, Nigeria.
    Emezue, Chris Chinenye
    Technical University of Munich, Germany; Masakhane NLP.
    Nakatuma-Nabende, Joyce
    Makerere University, Kampala, Uganda; Masakhane NLP.
    Ogayo, Perez
    African Leadership University, Rwanda; Masakhane NLP.
    Anuoluwapo, Aremu
    University of Lagos, Nigeria; Masakhane NLP.
    Gitau, Catherine
    Masakhane NLP.
    Mbaye, Derguene
    Masakhane NLP.
    Alabi, Jesujoba
    Max Planck Institute for Informatics, Germany; Masakhane NLP.
    Yimam, Seid Muhie
    LT Group, Universität Hamburg, Germany.
    Gwadabe, Tajuddeen Rabiu
    University of Chinese Academy of Science, China; Masakhane NLP.
    Ezeani, Ignatius
    Lancaster University, United Kingdom; Masakhane NLP.
    Niyongabo, Rubungo Andre
    University of Electronic Science and Technology of China, China; Masakhane NLP.
    Mukiibi, Jonathan
    Makerere University, Kampala, Uganda.
    Otiende, Verrah
    United States International University - Africa (USIU-A), Kenya; Masakhane NLP.
    Orife, Iroro
    Niger-Volta LTI; Masakhane NLP.
    David, Davis
    Masakhane NLP.
    Ngom, Samba
    Masakhane NLP.
    Adewumi, Tosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Masakhane NLP.
    Rayson, Paul
    Lancaster University, United Kingdom.
    Adeyemi, Mofetoluwa
    Masakhane NLP.
    Muriuki, Gerald
    Makerere University, Kampala, Uganda.
    Anebi, Emmanuel
    Masakhane NLP.
    Chukwuneke, Chimaka
    Masakhane NLP.
    Odu, Nkiruka
    African University of Science and Technology, Abuja, Nigeria.
    Wairagala, Eric Peter
    Makerere University, Kampala, Uganda.
    Oyerinde, Samuel
    Masakhane NLP.
    Siro, Clemencia
    Masakhane NLP.
    Bateesa, Tobius Saul
    Makerere University, Kampala, Uganda.
    Oloyede, Temilola
    Masakhane NLP.
    Wambui, Yvonne
    Masakhane NLP.
    Akinode, Victor
    Masakhane NLP.
    Nabagereka, Deborah
    Makerere University, Kampala, Uganda.
    Katusiime, Maurice
    Makerere University, Kampala, Uganda.
    Awokoya, Ayodele
    University of Ibadan, Nigeria; Masakhane NLP.
    Mboup, Mouhamadane
    Masakhane NLP.
    Gebreyohannes, Dibora
    Masakhane NLP.
    Tilaye, Henok
    Masakhane NLP.
    Nwaike, Kelechi
    Masakhane NLP.
    Wolde, Degaga
    Masakhane NLP.
    Faye, Abdoulaye
    Masakhane NLP.
    Sibanda, Blessing
    Namibia University of Science and Technology, Namibia; Masakhane NLP.
    Ahia, Orevaoghene
    Instadeep, Nigeria; Masakhane NLP.
    Dossou, Bonaventure F. P.
    Jacobs University Bremen, Germany; Masakhane NLP.
    Ogueji, Kelechi
    University of Waterloo, Canada; Masakhane NLP.
    Diop, Thierno Ibrahima
    Masakhane NLP.
    Diallo, Abdoulaye
    Masakhane NLP.
    Akinfaderin, Adewale
    Masakhane NLP.
    Marengereke, Tendai
    Masakhane NLP.
    Osei, Salomey
    African Institute for Mathematical Sciences (AIMS-AMMI), Ethiopia; Masakhane NLP.
    MasakhaNER: Named Entity Recognition for African Languages (2021). In: Transactions of the Association for Computational Linguistics, E-ISSN 2307-387X, Vol. 9, pp. 1116-1131. Journal article (Refereed)
    Abstract [en]

    We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
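    Applying one of the released models takes a few lines with the transformers pipeline API; the checkpoint id below is a placeholder assumption, so substitute whichever MasakhaNER model you actually use:

        from transformers import pipeline

        ner = pipeline("ner", model="Davlan/xlm-roberta-base-masakhaner",  # assumed id
                       aggregation_strategy="simple")
        for ent in ner("Shugaban Najeriya ya ziyarci Kano a watan Janairu."):
            print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))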

  • 11.
    Adelani, David Ifeoluwa
    et al.
    Masakhane NLP; Saarland University, Germany; University College London, UK.
    Neubig, Graham
    Carnegie Mellon University, USA.
    Ruder, Sebastian
    Google Research.
    Rijhwani, Shruti
    Carnegie Mellon University, USA.
    Beukman, Michael
    Masakhane NLP; University of the Witwatersrand, South Africa.
    Palen-Michel, Chester
    Masakhane NLP; Brandeis University, USA.
    Lignos, Constantine
    Masakhane NLP; Brandeis University, USA.
    Alabi, Jesujoba O.
    Masakhane NLP; Saarland University, Germany.
    Muhammad, Shamsuddeen H.
    Masakhane NLP; LIAAD-INESC TEC, Portugal.
    Nabende, Peter
    Masakhane NLP; Makerere University, Uganda.
    Bamba Dione, Cheikh M.
    Masakhane NLP; University of Bergen, Norway.
    Bukula, Andiswa
    SADiLaR, South Africa.
    Mabuya, Rooweither
    SADiLaR, South Africa.
    Dossou, Bonaventure F.P.
    Masakhane NLP; Mila Quebec AI Institute, Canada.
    Sibanda, Blessing
    Masakhane NLP.
    Buzaaba, Happy
    Masakhane NLP; RIKEN Center for AI Project, Japan.
    Mukiibi, Jonathan
    Masakhane NLP; Makerere University, Uganda.
    Kalipe, Godson
    Masakhane NLP.
    Mbaye, Derguene
    Masakhane NLP; Baamtu, Senegal.
    Taylor, Amelia
    Masakhane NLP; Malawi University of Business and Applied Science, Malawi.
    Kabore, Fatoumata
    Masakhane NLP; Uppsala University, Sweden.
    Emezue, Chris Chinenye
    Masakhane NLP; TU Munich, Germany.
    Aremu, Anuoluwapo
    Masakhane NLP.
    Ogayo, Perez
    Masakhane NLP; Carnegie Mellon University, USA.
    Gitau, Catherine
    Masakhane NLP.
    Munkoh-Buabeng, Edwin
    Masakhane NLP; TU Clausthal, Germany.
    Koagne, Victoire M.
    Masakhane NLP.
    Tapo, Allahsera Auguste
    Masakhane NLP; Rochester Institute of Technology, USA.
    Macucwa, Tebogo
    Masakhane NLP; University of Pretoria, South Africa.
    Marivate, Vukosi
    Masakhane NLP; University of Pretoria, South Africa.
    Mboning, Elvis
    Masakhane NLP.
    Gwadabe, Tajuddeen
    Masakhane NLP.
    Adewumi, Tosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Masakhane NLP.
    Ahia, Orevaoghene
    Masakhane NLP; University of Washington, USA.
    Nakatumba-Nabende, Joyce
    Masakhane NLP; Makerere University, Uganda.
    Mokono, Neo L.
    Masakhane NLP; University of Pretoria, South Africa.
    Ezeani, Ignatius
    Masakhane NLP; Lancaster University, UK.
    Chukwuneke, Chiamaka
    Masakhane NLP; Lancaster University, UK.
    Adeyemi, Mofetoluwa
    Masakhane NLP; University of Waterloo, Canada.
    Hacheme, Gilles Q.
    Masakhane NLP; Ai4innov, France.
    Abdulmumin, Idris
    Masakhane NLP; Ahmadu Bello University, Nigeria.
    Ogundepo, Odunayo
    Masakhane NLP; University of Waterloo, Canada.
    Yousuf, Oreen
    Masakhane NLP; Uppsala University, Sweden.
    Ngoli, Tatiana Moteu
    Masakhane NLP.
    Klakow, Dietrich
    Saarland University, Germany.
    MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition (2022). In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (ACL), 2022, pp. 4488-4508. Conference paper (Refereed)
    Abstract [en]

    African languages are spoken by over a billion people, but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically-diverse African languages.

  • 12.
    Adesam, Yvonne
    Stockholms universitet, Humanistiska fakulteten, Institutionen för lingvistik.
    The Multilingual Forest: Investigating High-quality Parallel Corpus Development (2012). Doctoral thesis, monograph (Other academic)
    Abstract [sv]

    This doctoral thesis explores the creation of parallel treebanks: language data consisting of texts and their translations, annotated with syntactic information and with links between corresponding words, phrases and sentences in the translations. We describe the partly manual annotation of the parallel treebank SMULTRON, comprising 1,000 English, German and Swedish sentences. This description is the starting point for answering the first of the two questions of the thesis.

    • Which issues must be considered in order to create a high-quality parallel treebank?

    The units that are annotated and the choice of annotation scheme are important for quality, and a certain amount of automatic processing is necessary for scaling up the size. Automatic quality checks and automatic evaluation are important, but some manual inspection is necessary to achieve high quality.

    Furthermore, we explore using the information present in the annotation to improve the automatically created annotation for another language. This leads us to the second of the two questions of the thesis.

    • Can we improve automatic annotation by transferring information available in the other languages?

    The experiments show that automatic alignment transferred from two language pairs, L1–L2 and L1–L3, to the third language pair, L2–L3, obtains improved precision, above all for the intersection of the transferred alignment and the automatic alignment. We also create a test collection for experiments on transferring annotation to resolve structural ambiguities of prepositional phrases. Majority-vote transfer improves the annotation compared to the baseline automatic annotation, but using linguistic cues to correct the annotation before majority transfer is even better, although more labour-intensive. Some incorrect structures cannot be corrected through transfer, however, since the languages use different formulations and therefore have different structures.
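    The alignment-transfer result lends itself to a small sketch: compose the L1-L2 and L1-L3 alignments into a transferred L2-L3 alignment, then intersect it with a direct automatic L2-L3 alignment, the combination reported above to raise precision. Indices are toy data:

        def compose(a12, a13):
            """a12, a13: sets of (i, j) word-index pairs anchored in L1."""
            return {(j, k) for (i, j) in a12 for (i2, k) in a13 if i == i2}

        a12 = {(0, 0), (1, 2), (2, 1)}       # L1 -> L2
        a13 = {(0, 0), (1, 1), (2, 2)}       # L1 -> L3
        direct23 = {(0, 0), (2, 1)}          # automatic L2 -> L3

        transferred = compose(a12, a13)
        print(transferred & direct23)        # high-precision intersection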

    Download full text (pdf)
    fulltext
  • 13.
    Adewumi, Oluwatosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Word Vector Representations using Shallow Neural Networks (2021). Licentiate thesis, compilation (Other academic)
    Abstract [en]

    This work highlights some important factors for consideration when developing word vector representations and data-driven conversational systems. The neural network methods for creating word embeddings have gained more prominence than their older, count-based counterparts. However, there are still challenges, such as prolonged training time and the need for more data, especially with deep neural networks. Shallow neural networks with less depth appear to have the advantage of less complexity; however, they also face challenges, such as sub-optimal combinations of hyper-parameters which produce sub-optimal models. This work, therefore, investigates the following research questions: "How importantly do hyper-parameters influence word embeddings’ performance?" and "What factors are important for developing ethical and robust conversational systems?" In answering the questions, various experiments were conducted using different datasets in different studies. The first study investigates, empirically, various hyper-parameter combinations for creating word vectors and their impact on a few natural language processing (NLP) downstream tasks: named entity recognition (NER) and sentiment analysis (SA). The study shows that optimal performance of embeddings for downstream NLP tasks depends on the task at hand. It also shows that certain combinations give strong performance across the tasks chosen for the study. Furthermore, it shows that reasonably smaller corpora are sufficient, or even produce better models in some cases, and take less time to train and load. This is important, especially now that environmental considerations play a prominent role in ethical research. Subsequent studies build on the findings of the first and explore the hyper-parameter combinations for Swedish and English embeddings for the downstream NER task. The second study presents the new Swedish analogy test set for evaluation of Swedish embeddings. Furthermore, it shows that character n-grams are useful for Swedish, a morphologically rich language. The third study shows that broad coverage of topics in a corpus appears to be important for producing better embeddings, and that noise may be helpful in certain instances, though it is generally harmful. Hence, a relatively smaller corpus can show better performance than a larger one, as demonstrated in the work with the smaller Swedish Wikipedia corpus against the Swedish Gigaword. The argument is made, in the final study (in answering the second question), from the point of view of the philosophy of science, that the near-elimination of unwanted bias in training data and the use of fora like peer review, conferences, and journals to provide the necessary avenues for criticism and feedback are instrumental for the development of ethical and robust conversational systems.

    Download full text (pdf)
    fulltext
  • 14.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Brännvall, Rickard
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. RISE Research Institutes of Sweden.
    Abid, Nosheen
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Pahlavan, Maryam
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Sabah Sabry, Sana
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning (2022). In: Proceedings of the Northern Lights Deep Learning Workshop 2022 / [ed] Sigurd Løkse, Benjamin Ricaud, Septentrio Academic Publishing, 2022, Vol. 3. Conference paper (Refereed)
    Abstract [en]

    Building open-domain conversational systems (or chatbots) that produce convincing responses is a recognized challenge. Recent state-of-the-art (SoTA) transformer-based models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in English. This work investigates, by an empirical study, the potential for transfer learning of such models to the Swedish language. DialoGPT, an English-language pre-trained model, is adapted by training on three different Swedish conversational datasets obtained from publicly available sources: Reddit, Familjeliv and the GDC. Perplexity score (an automated intrinsic metric) and surveys by human evaluation were used to assess the performance of the fine-tuned models. We also compare the DialoGPT experiments with an attention-mechanism-based seq2seq baseline model, trained on the GDC dataset. The results indicate that the capacity for transfer learning can be exploited with considerable success. Human evaluators asked to score the simulated dialogues judged over 57% of the chatbot responses to be human-like for the model trained on the largest (Swedish) dataset. The work agrees with the hypothesis that deep monolingual models learn some abstractions which generalize across languages. We contribute the codes, datasets and model checkpoints and host the demos on the HuggingFace platform.
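    The intrinsic metric used above, perplexity, is the exponential of the model's mean token cross-entropy on held-out text. A sketch with the base (English) DialoGPT checkpoint; the paper evaluates its Swedish fine-tuned variants:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
        model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
        model.eval()

        ids = tok("Hej, hur mår du idag?", return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean token cross-entropy
        print(torch.exp(loss).item())           # perplexity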

  • 15.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Exploring Swedish & English fastText Embeddings (2022). In: Artificial Intelligence and Cognition 2022: Proceedings of the 8th International Workshop on Artificial Intelligence and Cognition / [ed] Hadi Banaee, Amy Loutfi, Alessandro Saffiotti, Antonio Lieto, 2022, Vol. 3400, pp. 201-208. Conference paper (Refereed)
    Abstract [en]

    In this paper, we show that embeddings from relatively smaller corpora sometimes outperform those from larger corpora, and we introduce a new Swedish analogy test set and make it publicly available. To achieve good performance in Natural Language Processing (NLP) downstream tasks, several factors play important roles: dataset size, the right hyper-parameters, and well-trained embeddings. We utilize the fastText tool for our experiments. We evaluate both the Swedish and English embeddings that we created using intrinsic evaluation (including analogy & Spearman correlation) and compare them with 2 common, publicly available embeddings. Our English continuous Bag-of-Words (CBoW)-negative sampling embedding shows better performance compared to the publicly available GoogleNews version. We also describe the relationship between NLP and cognitive science. We contribute the embeddings for research or other useful purposes by publicly releasing them.
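    Training fastText-style embeddings with character n-grams, which the authors find helpful for morphologically rich Swedish, can be sketched via gensim (the paper uses the fastText tool itself; hyper-parameters here are illustrative):

        from gensim.models import FastText

        sentences = [["kungen", "talar", "svenska"],
                     ["drottningen", "talar", "svenska"]]
        model = FastText(sentences=sentences, vector_size=100, window=5,
                         sg=0, negative=5, min_n=3, max_n=6,
                         min_count=1, epochs=10)
        # subword n-grams yield vectors even for unseen inflected forms
        print(model.wv.most_similar("kungens", topn=2))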

    Download full text (pdf)
    fulltext
  • 16.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Exploring Swedish & English fastText Embeddings for NER with the Transformer. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper, our main contributions are that embeddings from relatively smaller corpora can outperform ones from far larger corpora and we present the new Swedish analogy test set. To achieve a good network performance in natural language processing (NLP) downstream tasks, several factors play important roles: dataset size, the right hyper-parameters, and well-trained embeddings. We show that, with the right set of hyper-parameters, good network performance can be reached even on smaller datasets. We evaluate the embeddings at the intrinsic level and extrinsic level, by deploying them on the Transformer in named entity recognition (NER) task and conduct significance tests. This is done for both Swedish and English. We obtain better performance in both languages on the downstream task with far smaller training data, compared to recently released, common crawl versions; and character n-grams appear useful for Swedish, a morphologically rich language.

  • 17.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Word2Vec: Optimal hyperparameters and their impact on natural language processing downstream tasks (2022). In: Open Computer Science, E-ISSN 2299-1093, Vol. 12, no. 1, pp. 134-141. Journal article (Refereed)
    Abstract [en]

    Word2Vec is a prominent model for natural language processing tasks. Similar inspiration is found in distributed embeddings (word-vectors) in recent state-of-the-art deep neural networks. However, wrong combination of hyperparameters can produce embeddings with poor quality. The objective of this work is to empirically show that Word2Vec optimal combination of hyper-parameters exists and evaluate various combinations. We compare them with the publicly released, original Word2Vec embedding. Both intrinsic and extrinsic (downstream) evaluations are carried out, including named entity recognition and sentiment analysis. Our main contributions include showing that the best model is usually task-specific, high analogy scores do not necessarily correlate positively with F1 scores, and performance is not dependent on data size alone. If ethical considerations to save time, energy, and the environment are made, then relatively smaller corpora may do just as well or even better in some cases. Increasing the dimension size of embeddings after a point leads to poor quality or performance. In addition, using a relatively small corpus, we obtain better WordSim scores, corresponding Spearman correlation, and better downstream performances (with significance tests) compared to the original model, which is trained on a 100 billion-word corpus.
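    The hyper-parameter search the article reports can be sketched as a small grid over gensim's Word2Vec settings; the corpus and grid are placeholders, and each trained model would be scored on the intrinsic and downstream evaluations described above:

        from itertools import product
        from gensim.models import Word2Vec

        corpus = [["the", "quick", "brown", "fox"], ["the", "lazy", "dog"]] * 100

        for sg, window, dim in product([0, 1], [4, 8], [100, 300]):
            model = Word2Vec(sentences=corpus, sg=sg, window=window,
                             vector_size=dim, negative=5, min_count=1, epochs=5)
            print(f"sg={sg} window={window} dim={dim} vocab={len(model.wv)}")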

  • 18.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks. Manuscript (preprint) (Other academic)
    Abstract [en]

    Word2Vec is a prominent model for natural language processing (NLP) tasks. Similar inspiration is found in distributed embeddings for new state-of-the-art (SotA) deep neural networks. However, the wrong combination of hyper-parameters can produce poor-quality vectors. The objective of this work is to show empirically that an optimal combination of hyper-parameters exists, and to evaluate various combinations. We compare them with the released, pre-trained original word2vec model. Both intrinsic and extrinsic (downstream) evaluations, including named entity recognition (NER) and sentiment analysis (SA), were carried out. The downstream tasks reveal that the best model is usually task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that the same applies to focusing on data size alone. Increasing the vector dimension size after a point leads to poor quality or performance. If ethical considerations to save time, energy and the environment are made, then reasonably smaller corpora may do just as well or even better in some cases. Besides, using a small corpus, we obtain better human-assigned WordSim scores, corresponding Spearman correlation and better downstream performances (with significance tests) compared to the original model, trained on a 100 billion-word corpus.

  • 19.
    Adewumi, Tosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Masakhane.
    Adeyemi, Mofetoluwa
    Masakhane.
    Anuoluwapo, Aremu
    Masakhane.
    Peters, Bukola
    CIS.
    Buzaaba, Happy
    Masakhane.
    Samuel, Oyerinde
    Masakhane.
    Rufai, Amina Mardiyyah
    Masakhane.
    Ajibade, Benjamin
    Masakhane.
    Gwadabe, Tajudeen
    Masakhane.
    Koulibaly Traore, Mory Moussou
    Masakhane.
    Ajayi, Tunde Oluwaseyi
    Masakhane.
    Muhammad, Shamsuddeen
    Baruwa, Ahmed
    Masakhane.
    Owoicho, Paul
    Masakhane.
    Ogunremi, Tolulope
    Masakhane.
    Ngigi, Phylis
    Jomo Kenyatta University of Agriculture and Technology.
    Ahia, Orevaoghene
    Masakhane.
    Nasir, Ruqayya
    Masakhane.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    AfriWOZ: Corpus for Exploiting Cross-Lingual Transfer for Dialogue Generation in Low-Resource, African Languages (2023). In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings, Institute of Electrical and Electronics Engineers Inc., 2023. Conference paper (Refereed)
    Abstract [en]

    Dialogue generation is an important NLP task fraught with many challenges. The challenges become more daunting for low-resource African languages. To enable the creation of dialogue agents for African languages, we contribute the first high-quality dialogue datasets for 6 African languages: Swahili, Wolof, Hausa, Nigerian Pidgin English, Kinyarwanda & Yorùbá. There are a total of 9,000 turns, each language having 1,500 turns, which we translate from a portion of the English multi-domain MultiWOZ dataset. Subsequently, we benchmark by investigating and analyzing the effectiveness of modelling through transfer learning, utilizing state-of-the-art (SoTA) deep monolingual models: DialoGPT and BlenderBot. We compare the models with a simple seq2seq baseline using perplexity. Besides this, we conduct human evaluation of single-turn conversations by using majority votes and measure inter-annotator agreement (IAA). We find that the hypothesis that deep monolingual models learn some abstractions that generalize across languages holds. We observe human-like conversations, to different degrees, in 5 out of the 6 languages. The language with the most transferable properties is Nigerian Pidgin English, with a human-likeness score of 78.1%, of which 34.4% are unanimous. We freely provide the datasets and host the model checkpoints/demos on the HuggingFace hub for public access.

  • 20.
    Adewumi, Tosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Södergren, Isabella
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Digitala tjänster och system.
    Alkhaled, Lama
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Sabry, Sana Sabah
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets (2023). In: Proceedings of Recent Advances in Natural Language Processing / [ed] Galia Angelova, Maria Kunilovskaya and Ruslan Mitkov, Incoma Ltd., 2023, pp. 1-10. Conference paper (Refereed)
    Abstract [en]

    We investigate five English NLP benchmark datasets (on the superGLUE leaderboard) and two Swedish datasets for bias, along multiple axes. The datasets are the following: Boolean Question (Boolq), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AXg), Recognising Textual Entailment (RTE), Swedish CB, and SWEDN. Bias can be harmful and it is known to be common in data, which ML models learn from. In order to mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to estimate and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common. Hence, we also contribute a new, large Swedish bias-labeled dataset (of 2 million samples), translated from the English version and train the SotA mT5 model on it. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We make the codes, model, and new dataset publicly available.

  • 21. Adkisson, J. M.
    et al.
    Westlund, Johannes
    KTH.
    Masuhara, H.
    A shell-like model for general purpose programming (2019). In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019. Conference paper (Refereed)
    Abstract [en]

    Shell scripting languages such as bash are designed to integrate with an OS, which mainly involves managing processes with implicit input and output streams. They also attempt to do this in a compact way that can reasonably be typed on a command-line interface. However, existing shell languages are not sufficient to serve as general-purpose languages: values are not observable except as raw streams of bytes, and they lack modern language features such as lexical scope and higher-order functions. By way of a new programming language, Magritte, we propose a general-purpose programming language with semantics similar to bash. In this paper, we discuss the early design of such a system, in which the primary unit of composition, as in bash, is processes with input and output channels, which can be read from or written to at any time, and which can be chained together via a pipe operator. We also explore concurrency semantics for such a language.
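    The composition model described here can be approximated in a few lines: processes as transformations of streams, chained with an overloaded pipe operator. This is an illustration of the semantics, not Magritte itself:

        class Proc:
            def __init__(self, fn):
                self.fn = fn              # fn: iterator -> iterator

            def __or__(self, other):      # (p | q) pipes p's output into q
                return Proc(lambda stream: other.fn(self.fn(stream)))

            def run(self, stream=()):
                return list(self.fn(iter(stream)))

        produce  = Proc(lambda _: iter(range(10)))
        double   = Proc(lambda xs: (2 * x for x in xs))
        keep_big = Proc(lambda xs: (x for x in xs if x > 10))

        print((produce | double | keep_big).run())  # [12, 14, 16, 18]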

  • 22. Agić, Zeljko
    et al.
    Tiedemann, Jörg
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Merkler, Danijela
    Krek, Simon
    Dobrovoljc, Kaja
    Moze, Sara
    Cross-lingual Dependency Parsing of Related Languages with Rich Morphosyntactic Tagsets (2014). In: Proceedings of the EMNLP 2014 Workshop on Language Technology for Closely Related Languages and Language Variants, 2014, pp. 13-24. Conference paper (Refereed)
  • 23.
    Ahlbom, Viktoria
    et al.
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik.
    Sågvall Hein, Anna
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik.
    Test Suites Covering the Functional Specifications of the Sub-components of the Swedish Prototype (1999). In: Working Papers in Computational Linguistics & Language Engineering; 13, ISSN 1401-923X, no. 13, pp. 28-. Journal article (Other academic)
  • 24.
    Ahlbom, Viktoria
    et al.
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik.
    Sågvall Hein, Anna
    Test Suites Covering the Functional Specifications of the Sub-components of the Swedish Prototype (1999). In: Working Papers in Computational Linguistics & Language Engineering; 13, ISSN 1401-923X, no. 13, pp. 28-. Journal article (Other academic)
  • 25.
    Ahltorp, Magnus
    et al.
    Institutet för språk och folkminnen, Språkrådet.
    Dürlich, Luise
    Uppsala universitet.
    Skeppstedt, Maria
    Textual Contexts for "Democracy": Using Topic- and Word-Models for Exploring Swedish Government Official Reports (2021). Conference paper (Refereed)
    Abstract [en]

    We here demonstrate how two types of NLP models - a topic model and a word2vec model - can be combined for exploring the content of a collection of Swedish Government Reports. We investigate if there are topics that frequently occur in paragraphs mentioning the word "democracy". Using the word2vec model, 530 clusters of semantically similar words were created, which were then applied in the pre-processing step when creating a topic model. This model detected 15 reoccurring topics among the paragraphs containing "democracy". Among these topics, 13 had closely associated paragraphs with a coherent content relating to some aspect of democracy.
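    The pre-processing step is the interesting part: words are mapped to word2vec-derived cluster labels before topic modelling, so the topic model sees semantic clusters rather than surface forms. A sketch with random stand-in vectors (the demonstration used 530 clusters):

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        vocab = ["demokrati", "folkstyre", "val", "rösta", "skatt", "avgift"]
        vectors = np.random.rand(len(vocab), 50)   # stand-in word2vec vectors

        labels = AgglomerativeClustering(n_clusters=3).fit_predict(vectors)
        word2cluster = dict(zip(vocab, labels))

        paragraph = ["demokrati", "val", "skatt"]
        print([f"cluster_{word2cluster[w]}" for w in paragraph])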

    Download full text (pdf)
    fulltext
  • 26.
    Ahltorp, Magnus
    et al.
    Stockholm.
    Skeppstedt, Maria
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV). Gavagai, Stockholm.
    Kitajima, Shiho
    Hokkaido Univ, Japan.
    Henriksson, Aron
    Stockholm University.
    Rzepka, Rafal
    Hokkaido Univ, Japan.
    Araki, Kenji
    Hokkaido Univ, Japan.
    Expansion of medical vocabularies using distributional semantics on Japanese patient blogs (2016). In: Journal of Biomedical Semantics, E-ISSN 2041-1480, Vol. 7, article id 58. Journal article (Refereed)
    Abstract [en]

    Background: Research on medical vocabulary expansion from large corpora has primarily been conducted using text written in English or similar languages, due to a limited availability of large biomedical corpora in most languages. Medical vocabularies are, however, essential also for text mining from corpora written in other languages than English and belonging to a variety of medical genres. The aim of this study was therefore to evaluate medical vocabulary expansion using a corpus very different from those previously used, in terms of grammar and orthographics, as well as in terms of text genre. This was carried out by applying a method based on distributional semantics to the task of extracting medical vocabulary terms from a large corpus of Japanese patient blogs. Methods: Distributional properties of terms were modelled with random indexing, followed by agglomerative hierarchical clustering of 3x100 seed terms from existing vocabularies, belonging to three semantic categories: Medical Finding, Pharmaceutical Drug and Body Part. By automatically extracting unknown terms close to the centroids of the created clusters, candidates for new terms to include in the vocabulary were suggested. The method was evaluated for its ability to retrieve the remaining n terms in existing medical vocabularies. Results: Removing case particles and using a context window size of 1 + 1 was a successful strategy for Medical Finding and Pharmaceutical Drug, while retaining case particles and using a window size of 8 + 8 was better for Body Part. For a 10n long candidate list, the use of different cluster sizes affected the result for Pharmaceutical Drug, while the effect was only marginal for the other two categories. For a list of top n candidates for Body Part, however, clusters with a size of up to two terms were slightly more useful than larger clusters. For Pharmaceutical Drug, the best settings resulted in a recall of 25 % for a candidate list of top n terms and a recall of 68 % for top 10n. For a candidate list of top 10n candidates, the second best results were obtained for Medical Finding: a recall of 58 %, compared to 46 % for Body Part. Only taking the top n candidates into account, however, resulted in a recall of 23 % for Body Part, compared to 16 % for Medical Finding. Conclusions: Different settings for corpus pre-processing, window sizes and cluster sizes were suitable for different semantic categories and for different lengths of candidate lists, showing the need to adapt parameters, not only to the language and text genre used, but also to the semantic category for which the vocabulary is to be expanded. The results show, however, that the investigated choices for pre-processing and parameter settings were successful, and that a Japanese blog corpus, which in many ways differs from those used in previous studies, can be a useful resource for medical vocabulary expansion.
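    The candidate-extraction step above reduces to ranking unknown terms by cosine similarity to a seed cluster's centroid. A sketch with random stand-ins for the random-indexing term vectors:

        import numpy as np

        rng = np.random.default_rng(0)
        seed_vectors = rng.random((5, 100))       # one cluster of seed terms
        unknown = {f"term_{i}": rng.random(100) for i in range(1000)}

        centroid = seed_vectors.mean(axis=0)

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        ranked = sorted(unknown, key=lambda t: cosine(unknown[t], centroid),
                        reverse=True)
        print(ranked[:10])                        # vocabulary candidates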

  • 27.
    Ahmady, Tobias
    et al.
    KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Data- och elektroteknik.
    Klein Rosmar, Sander
    KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Data- och elektroteknik.
    Translation of keywords between English and Swedish (2014). Independent thesis, basic level (university diploma), 10 credits / 15 HE credits. Student thesis (Degree project)
    Abstract [sv]

    In this project, we have investigated how to perform rule-based machine translation of keywords between two languages. The goal was to translate a given set of one or more keywords in a source language into a corresponding, equally large set of keywords in the target language. Some words in the source language may, however, have several meanings and may be translated into several words, or none, in the target language. When ambiguous translations arise, the best translation of the keyword is to be chosen with respect to the context. In traditional machine translation, a word's context is determined by the phrase or sentence in which it occurs. In this project, the given set of keywords represents the context.

    By examining traditional approaches to machine translation, we have designed and described models specifically for the translation of keywords. We present a direct machine translation solution for keywords between English and Swedish, in which we introduce a simple graph-based model for ambiguous translations.
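    The keyword-set-as-context idea can be illustrated with a toy disambiguator: when a source keyword has several candidate translations, pick the one most related to the translations of the other keywords. The lexicon and relatedness pairs below are stand-ins, not the thesis's actual graph model:

        lexicon = {"bank": ["bank", "strand"], "money": ["pengar"], "river": ["flod"]}
        related = {("bank", "pengar"), ("strand", "flod")}  # toy relatedness

        def coherence(candidate, context):
            return sum((candidate, c) in related or (c, candidate) in related
                       for c in context)

        def translate(keywords):
            out = {}
            for kw in keywords:
                context = [lexicon[k][0] for k in keywords if k != kw]
                out[kw] = max(lexicon[kw], key=lambda c: coherence(c, context))
            return out

        print(translate({"bank", "money"}))  # 'bank' -> 'bank' (finance sense)
        print(translate({"bank", "river"}))  # 'bank' -> 'strand' (shore sense)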

    Download full text (pdf)
    Translation of keywords between English and Swedish
  • 28.
    Ahrenberg, Lars
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    A Simple Hybrid Aligner for Generating Lexical Correspondences in Parallel Texts (1998). In: Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL'98) / [ed] Pierre Isabelle, Stroudsburg, PA, USA: The Association for Computational Linguistics, 1998, pp. 29-35. Conference paper (Refereed)
  • 29.
    Ahrenberg, Lars
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    Alignment-based profiling of Europarl data in an English-Swedish parallel corpus (2010). In: Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) / [ed] Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis and Mike Rosner and Daniel Tapias, Paris, France: European Language Resources Association (ELRA), 2010, pp. 3398-3404. Conference paper (Refereed)
    Abstract [en]

    This paper profiles the Europarl part of an English-Swedish parallel corpus and compares it with three other subcorpora of the same parallel corpus. We first describe our method for comparison, which is based on alignments, both at the token level and the structural level. Although two of the other subcorpora contain fiction, it is found that the Europarl part is the one having the highest proportion of many types of restructurings, including additions, deletions and long-distance reorderings. We explain this by the fact that the majority of Europarl segments are parallel translations.

    Download full text (pdf)
    FULLTEXT01
  • 30.
    Ahrenberg, Lars
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Comparing machine translation and human translation: A case study. 2017. In: RANLP 2017 The First Workshop on Human-Informed Translation and Interpreting Technology (HiT-IT), Proceedings of the Workshop, September 7th, 2017 / [ed] Irina Temnikova, Constantin Orasan, Gloria Corpas and Stephan Vogel. Shoumen, Bulgaria: Association for Computational Linguistics, 2017, pp. 21-28. Conference paper (Refereed)
    Abstract [en]

    As machine translation technology improves, comparisons to human performance are often made in quite general and exaggerated terms. It is therefore important to be able to account for differences accurately. This paper reports a simple, descriptive scheme for comparing translations and applies it to two translations of a British opinion article published in March 2017: a human translation (HT) into Swedish and a machine translation (MT). While the comparison is limited to one text, the results are indicative of current limitations in MT.

    Download full text (pdf)
    Comparing machine translation and human translation: A case study
  • 31.
    Ahrenberg, Lars
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Converting an English-Swedish Parallel Treebank to Universal Dependencies. 2015. In: Proceedings of the Third International Conference on Dependency Linguistics (DepLing 2015), Association for Computational Linguistics, 2015, pp. 10-19, article id W15-2103. Conference paper (Refereed)
    Abstract [en]

    The paper reports experiences of automatically converting the dependency analysis of the LinES English-Swedish parallel treebank to universal dependencies (UD). The most tangible result is a version of the treebank that actually employs the relations and parts-of-speech categories required by UD, and no other. It is also more complete in that punctuation marks have received dependencies, which is not the case in the original version. We discuss our method in the light of problems that arise from the desire to keep the syntactic analyses of a parallel treebank internally consistent, while available monolingual UD treebanks for English and Swedish diverge somewhat in their use of UD annotations. Finally, we compare the output from the conversion program with the existing UD treebanks.
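    To give a concrete flavour of what such a conversion involves (a hypothetical sketch, not the paper's conversion program; the legacy label names are invented), one can map old relation labels to UD relations with a lookup table and attach previously unattached punctuation, one of the gaps the abstract mentions:

        # Hypothetical sketch of a dependency-label conversion (Python).
        # Tokens are (id, head, deprel); legacy labels here are illustrative.
        LABEL_MAP = {"ROOT": "root", "SS": "nsubj", "OO": "obj", "DT": "det"}

        def convert_sentence(tokens):
            root_id = next(tid for tid, head, _ in tokens if head == 0 and _ != "")
            converted = []
            for tid, head, deprel in tokens:
                if deprel in LABEL_MAP:
                    deprel = LABEL_MAP[deprel]
                elif deprel == "":                  # unattached punctuation
                    head, deprel = root_id, "punct"
                converted.append((tid, head, deprel))
            return converted

        sent = [(1, 2, "SS"), (2, 0, "ROOT"), (3, 2, "OO"), (4, 0, "")]
        print(convert_sentence(sent))
        # [(1, 2, 'nsubj'), (2, 0, 'root'), (3, 2, 'obj'), (4, 2, 'punct')]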

    Download full text (pdf)
    fulltext
  • 32.
    Ahrenberg, Lars
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Towards a research infrastructure for translation studies. 2014. Conference paper (Other academic)
    Abstract [en]

    In principle, the CLARIN research infrastructure provides a good environment to support research on translation. In practice, progress within CLARIN in this area seems to be fairly slow. In this paper I give examples of the resources currently available and suggest what is needed to achieve a relevant research infrastructure for translation studies. I also argue that translation studies has more to gain from language technology, and statistical machine translation in particular, than is generally assumed, and give some examples.

    Download full text (pdf)
    fulltext
  • 33.
    Ahrenberg, Lars
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Towards an adequate account of parataxis in Universal Dependencies. 2019. In: Proceedings of the Third Workshop on Universal Dependencies (UDW, SyntaxFest 2019) / [ed] Alexandre Rademaker, Francis Tyers. Association for Computational Linguistics, 2019. Conference paper (Refereed)
    Abstract [en]

    The parataxis relation as defined for Universal Dependencies 2.0 is general and, for this reason, sometimes hard to distinguish from competing analyses, such as coordination (conj) or apposition (appos). The specific subtypes that are listed for parataxis are also quite different in character. In this study we first show that actual practice among UD annotators is varied, using the parallel UD (PUD) treebanks as data. We then review the current definitions and guidelines and suggest improvements.

  • 34.
    Ahrenberg, Lars and Merkel, Magnus and Ridings, Daniel and Sågvall Hein, Anna and Tiedemann, Jörg
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik.
    Automatic processing of parallel corpora: A Swedish perspective. 1999. Report (Other academic)
    Abstract [en]

    As empirical methods have come to the fore in language technology and translation studies, the processing of parallel texts and parallel corpora has become a major issue. In this article we review the state of the art in alignment and data extraction techniques for parallel texts, and give an overview of current work in Sweden in this area.

  • 35.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Danielsson, Henrik
    Linköpings universitet, Institutet för handikappvetenskap (IHV). Linköpings universitet, Institutionen för beteendevetenskap och lärande, Handikappvetenskap. Linköpings universitet, Filosofiska fakulteten.
    Bengtsson, Staffan
    The Swedish Institute for Disability Research, Jönköping University, Sweden.
    Arvå, Hampus
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Holme, Lotta
    Linköpings universitet, Institutionen för beteendevetenskap och lärande, Pedagogik och didaktik. Linköpings universitet, Utbildningsvetenskap.
    Jönsson, Arne
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Studying Disability Related Terms with Swe-Clarin Resources. 2019. Conference paper (Refereed)
    Abstract [en]

    In Swedish, as in other languages, the words used to refer to disabilities and people with disabilities are manifold. Recommendations as to which terms to use have been changed several times over the last hundred years. In this exploratory paper we have used textual resources provided by Swe-Clarin to study such changes quantitatively. We demonstrate that old and new recommendations co-exist for long periods of time, and that usage sometimes converges.
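    A minimal sketch of this kind of quantitative study (illustrative only; the corpus interface and the chosen terms are placeholders, not the Swe-Clarin tooling) counts competing terms per decade and reports their relative frequencies:

        # Hypothetical sketch: relative frequency of competing terms per decade.
        from collections import Counter, defaultdict

        TERMS = ["handikappad", "funktionshindrad", "funktionsnedsatt"]  # examples

        def term_trends(corpus):
            """corpus: iterable of (year, text) pairs."""
            counts = defaultdict(Counter)   # decade -> term -> occurrences
            totals = Counter()              # decade -> total tokens
            for year, text in corpus:
                decade = (year // 10) * 10
                tokens = text.lower().split()
                totals[decade] += len(tokens)
                for term in TERMS:
                    counts[decade][term] += tokens.count(term)
            return {d: {t: counts[d][t] / totals[d] for t in TERMS}
                    for d in sorted(counts)}

        # Usage with placeholder documents:
        # print(term_trends([(1965, doc1), (1995, doc2), (2015, doc3)]))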

    Download full text (pdf)
    Introduction to proceedings
    Download full text (pdf)
    Article in full text
  • 36.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Holmer, Daniel
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Holmlid, Stefan
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Arne
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Analysing Changes in Official Use of the Design Concept Using SweCLARIN Resources. 2022. In: Proceedings of the CLARIN Annual Meeting, 2022. Conference paper (Refereed)
    Abstract [en]

    We show how the tools and language resources developed within the SweClarin infrastructure can be used to investigate changes in the use and understanding of the related Swedish words arkitektur, design, form, and formgivning. Specifically, we compare their use in two governmental public reports on design, one from 1999 and the other from 2015. We test the hypothesis that their meaning has developed in a way that blurs distinctions that may be important to stakeholders in the respective fields.

  • 37.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Holmer, Daniel
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Holmlid, Stefan
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Arne
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Analysing changes in official use of the design concept using SweCLARIN resources. 2023. In: Selected Papers from the CLARIN Annual Conference 2022 / [ed] Tomaž Erjavec and Maria Eskevich. Linköping: Linköping University Electronic Press, 2023. Conference paper (Refereed)
    Abstract [en]

    We investigate changes in the use of four Swedish words from the fields of design and architecture. It has been suggested that their meanings have been blurred, especially in governmental reports and policy documents, so that distinctions between them that are important to stakeholders in the respective fields are lost. Specifically, we compare usage in two governmental public reports on design, one from 1999 and the other from 2015, and additionally in opinion responses to the 2015 report. Our approach is to contextualise occurrences of the words in different representations of the texts using word embeddings, topic modelling and sentiment analysis. Tools and language resources developed within the SweClarin infrastructure have been crucial for the implementation of the study.
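    As a hedged illustration of the word-embedding component of such an approach (not the study's actual pipeline; gensim and the file names are assumptions), one can train a small embedding model per report and compare a word's nearest neighbours across the two models:

        # Hypothetical sketch: contrast a word's usage in two documents by
        # training one word2vec model per document (assumes gensim installed).
        from gensim.models import Word2Vec

        def neighbours(sentences, word, topn=5):
            """Train a small model and return the word's nearest neighbours."""
            model = Word2Vec(sentences, vector_size=100, window=5,
                             min_count=2, epochs=20)
            if word not in model.wv:
                return []
            return [w for w, _ in model.wv.most_similar(word, topn=topn)]

        # `tokenize` and the report files are placeholders:
        # report_1999 = [tokenize(line) for line in open("sou_1999.txt")]
        # report_2015 = [tokenize(line) for line in open("sou_2015.txt")]
        # print(neighbours(report_1999, "design"))
        # print(neighbours(report_2015, "design"))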

  • 38.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska fakulteten.
    Megyesi, Beáta
    Uppsala universitet, Institutionen för lingvistik och filologi.
    Proceedings of the Workshop on NLP and Pseudonymisation. 2019. Proceedings (editorship) (Refereed)
    Download full text (pdf)
    FULLTEXT01
  • 39.
    Ahrenberg, Lars
    et al.
    Linköping University, Sweden.
    Megyesi, Beáta
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Proceedings of the Workshop on NLP and Pseudonymisation. 2019. Proceedings (editorship) (Refereed)
    Download full text (pdf)
    fulltext
  • 40.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    Merkel, Magnus
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    A knowledge-lite approach to word alignment. 2000. In: Parallel Text Processing: Alignment and Use of Translation Corpora / [ed] Jean Veronis. Dordrecht, The Netherlands: Kluwer Academic Publishers, 2000, pp. 97-116. Book chapter (Other academic)
    Abstract [en]

    The most promising approach to word alignment is to combine statistical methods with non-statistical information sources. Some of the proposed non-statistical sources, including bilingual dictionaries, POS-taggers and lemmatizers, rely on considerable linguistic knowledge, while other knowledge-lite sources such as cognate heuristics and word order heuristics can be implemented relatively easily. While knowledge-heavy sources might be expected to give better performance, knowledge-lite systems are easier to port to new language pairs and text types, and they can give sufficiently good results for many purposes, e.g. if the output is to be used by a human user for the creation of a complete word-aligned bitext. In this paper we describe the current status of the Linköping Word Aligner (LWA), which combines the use of statistical measures of co-occurrence with four knowledge-lite modules for (i) word categorization, (ii) morphological variation, (iii) word order, and (iv) phrase recognition. We demonstrate the portability of the system (from English-Swedish texts to French-English texts) and present results for these two language pairs. Finally, we report observations from an error analysis of system output, and identify the major strengths and weaknesses of the system.
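    A toy version of the knowledge-lite combination (illustrative only, not the LWA code) scores candidate links with a Dice co-occurrence measure over sentence-aligned pairs and adds a bonus for graphically similar words, a crude cognate heuristic:

        # Hypothetical sketch: Dice co-occurrence plus a cognate heuristic.
        from collections import Counter
        from difflib import SequenceMatcher

        def dice_scores(bitext):
            """bitext: list of (source_tokens, target_tokens) sentence pairs."""
            src, tgt, pair = Counter(), Counter(), Counter()
            for s_toks, t_toks in bitext:
                for s in set(s_toks):
                    src[s] += 1
                    for t in set(t_toks):
                        pair[(s, t)] += 1
                for t in set(t_toks):
                    tgt[t] += 1
            return {st: 2 * n / (src[st[0]] + tgt[st[1]])
                    for st, n in pair.items()}

        def cognate_bonus(s, t, threshold=0.7, bonus=0.3):
            """Boost graphically similar words across the language pair."""
            return bonus if SequenceMatcher(None, s, t).ratio() >= threshold else 0.0

        def align(bitext):
            dice = dice_scores(bitext)
            return [(s, max(t_toks, key=lambda t: dice.get((s, t), 0.0)
                            + cognate_bonus(s, t)))
                    for s_toks, t_toks in bitext for s in s_toks]

        bitext = [(["the", "parliament"], ["parlamentet"]),
                  (["the", "bank"], ["banken"])]
        print(align(bitext))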

  • 41.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    Merkel, Magnus
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    Correspondence measures for MT evaluation. 2000. In: Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000). Paris, France: European Language Resources Association (ELRA), 2000, pp. 41-46. Conference paper (Refereed)
  • 42.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Merkel, Magnus
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Ridings, Daniel
    Department of Swedish Language, Goteborg University, Goteborg Sweden.
    Sågvall Hein, Anna
    Department of Linguistics, Uppsala University, Uppsala Sweden.
    Tiedemann, Jörg
    Department of Linguistics, Uppsala University, Uppsala Sweden.
    Automatic Processing of Parallel Corpora: A Swedish Perspective. 1999. Report (Other academic)
    Abstract [en]

    As empirical methods have come to the fore in multilingual language technology and translation studies, the processing of parallel texts and parallel corpora has become a major research area in computational linguistics. In this article we review the state of the art in alignment and data extraction techniques for parallel texts, and give an overview of current work in Sweden in this area. In a final section, we summarize the results achieved so far and make some proposals for future research.

    Download full text (pdf)
    fulltext
  • 43. Ahrenberg, Lars
    et al.
    Merkel, Magnus
    Sågvall Hein, Anna
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik.
    Tiedemann, Jörg
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik.
    Evaluation of LWA and UWA. 1999. Report (Other academic)
  • 44.
    Ahrenberg, Lars
    et al.
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    Merkel, Magnus
    Linköpings universitet, Institutionen för datavetenskap, NLPLAB - Laboratoriet för databehandling av naturligt språk. Linköpings universitet, Tekniska högskolan.
    Sågvall Hein, Anna
    Institutionen för lingvistik, Uppsala universitet..
    Tiedemann, Jörg
    Institutionen för lingvistik, Uppsala universitet.
    Evaluation of word alignment systems. 2000. In: Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000). Paris, France: European Language Resources Association (ELRA), 2000, pp. 1255-1261. Conference paper (Refereed)
  • 45.
    Ait-Mlouk, Addi
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för beräkningsvetenskap. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Tillämpad beräkningsvetenskap.
    Alawadi, Sadi
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för beräkningsvetenskap. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Tillämpad beräkningsvetenskap.
    Toor, Salman
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för beräkningsvetenskap. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Tillämpad beräkningsvetenskap.
    Hellander, Andreas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för beräkningsvetenskap. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Tillämpad beräkningsvetenskap.
    FedQAS: Privacy-Aware Machine Reading Comprehension with Federated Learning. 2022. In: Applied Sciences, E-ISSN 2076-3417, Vol. 12, no. 6, article id 3130. Journal article (Refereed)
    Abstract [en]

    Machine reading comprehension (MRC) of text data is a challenging task in Natural Language Processing (NLP), with a lot of ongoing research fueled by the release of the Stanford Question Answering Dataset (SQuAD) and Conversational Question Answering (CoQA). It is considered an effort to teach computers how to "understand" a text, and then to be able to answer questions about it using deep learning. However, until now, large-scale training on private text data and knowledge sharing has been missing for this NLP task. Hence, we present FedQAS, a privacy-preserving machine reading system capable of leveraging large-scale private data without the need to pool those datasets in a central location. The proposed approach combines transformer models and federated learning technologies. The system is developed using the FEDn framework and deployed as a proof-of-concept alliance initiative. FedQAS is flexible, language-agnostic, and allows intuitive participation and execution of local model training. In addition, we present the architecture and implementation of the system, and provide a reference evaluation based on the SQuAD dataset, to showcase how it overcomes data privacy issues and enables knowledge sharing between alliance members in a federated learning setting.
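    The federated side of such a system reduces to aggregating locally trained model weights without moving the private text. A minimal FedAvg-style sketch follows (illustrative only; not the FEDn or FedQAS code, and NumPy stands in for a real reader model):

        # Hypothetical sketch of a federated-averaging training loop.
        import numpy as np

        def local_update(weights, private_data):
            """Placeholder for a client's local training on private text."""
            return weights - 0.01 * np.random.randn(*weights.shape)

        def federated_round(global_weights, clients):
            """Each client trains locally; the server averages the results."""
            updates = [local_update(global_weights.copy(), d) for d in clients]
            return np.mean(updates, axis=0)

        weights = np.zeros(10)                     # toy model parameters
        clients = ["site_a", "site_b", "site_c"]   # stand-ins for private datasets
        for _ in range(5):                         # five communication rounds
            weights = federated_round(weights, clients)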

    Download full text (pdf)
    FULLTEXT01
  • 46.
    Akrin, Christoffer
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap (from 2013).
    Tham, Simon
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap (from 2013).
    A Natural Language Interface for Querying Linked Data. 2020. Independent thesis, basic level (Bachelor's degree), 10 credits / 15 HE credits. Student thesis (Degree project)
    Abstract [en]

    The thesis introduces a proof-of-concept idea that could spark great interest from many industries. The idea consists of a remote Natural Language Interface (NLI) for querying Knowledge Bases (KBs). The system applies natural language technology tools provided by Stanford CoreNLP and queries KBs using the query language SPARQL. Natural Language Processing (NLP) is used to analyze the semantics of a question written in natural language and to generate relational information about the question. With correctly defined relations, the question can be queried against KBs containing relevant Linked Data. The Linked Data follows the Resource Description Framework (RDF) model by expressing relations in the form of semantic triples: subject-predicate-object.

    With our NLI, any KB can be understood semantically. By providing correct training data, the AI can learn to understand the semantics of the RDF data stored in the KB. The ability to understand the RDF data allows for the process of extracting relational information from questions about the KB. With the relational information, questions can be translated to SPARQL and be queried on the KB.
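    To illustrate the final step, the extracted subject-predicate-object relations can be serialized into a SPARQL query. The sketch below is hypothetical (the predicate URI is a placeholder, and this is not the thesis code):

        # Hypothetical sketch: turning an extracted relation into SPARQL.
        def build_query(subject_label, predicate_uri):
            """Ask a KB for the object of a subject-predicate pair."""
            return f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?object WHERE {{
            ?s rdfs:label "{subject_label}"@en .
            ?s <{predicate_uri}> ?object .
        }}"""

        # For "What is the capital of Sweden?" the NLP stage might yield:
        print(build_query("Sweden", "http://example.org/ontology/capital"))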

    Download full text (pdf)
    fulltext
  • 47.
    Al Dakkak, O.
    et al.
    Higher Institute of Applied Sciencenand Technology (HIAST).
    Ghneim, N.
    Higher Institute of Applied Sciencenand Technology (HIAST).
    Abou Zliekha, M.
    Damascus University/Faculty of Information Technology.
    Al Moubayed, Samer
    Damascus University/Faculty of Information Technology.
    Emotional Inclusion in An Arabic Text-To-Speech. 2005. In: Proceedings of the 13th European Signal Processing Conference (EUSIPCO), Antalya, Turkey, 2005. Conference paper (Refereed)
    Abstract [en]

    The goal of this paper is to present an emotional audio-visual text-to-speech system for the Arabic language. The system is based on two entities: an emotional audio text-to-speech system, which generates speech depending on the input text and the desired emotion type, and an emotional visual model, which generates the talking head by forming the corresponding visemes. The phoneme-to-viseme mapping and the emotion shaping use a 3-parametric face model based on the Abstract Muscle Model. We have thirteen viseme models and five emotions as parameters to the face model. The TTS produces the phonemes corresponding to the input text and speech with suitable prosody to convey the prescribed emotion. In parallel, the system generates the visemes and sends the controls to the facial model to obtain the animation of the talking head in real time.

  • 48.
    Al Dakkak, O.
    et al.
    HIAST, Damascus, Syria.
    Ghneim, N.
    HIAST, Damascus, Syria.
    Abou Zliekha, M.
    Damascus University.
    Al Moubayed, Samer
    Damascus University.
    Prosodic Feature Introduction and Emotion Incorporation in an Arabic TTS. 2006. In: Proceedings of the IEEE International Conference on Information and Communication Technologies, Damascus, Syria, 2006, pp. 1317-1322. Conference paper (Refereed)
    Abstract [en]

    Text-to-speech is a crucial part of many man-machine communication applications, such as phone booking and banking and vocal e-mail, as well as of many applications for impaired persons, such as reading machines for the blind and talking machines for persons with speech difficulties. However, the main drawback of most speech synthesizers in such talking machines is their metallic sound. To sound natural, a synthesizer has to incorporate prosodic features as close as possible to natural prosody; this helps to improve the quality of the synthetic speech. Current research worldwide is directed towards better automatic prosody generation.

  • 49.
    Al Moubayed, Samer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Talteknologi, CTT.
    Prosodic Disambiguation in Spoken Systems Output. 2009. In: Proceedings of Diaholmia'09: 2009 Workshop on the Semantics and Pragmatics of Dialogue / [ed] Jens Edlund, Joakim Gustafson, Anna Hjalmarsson, Gabriel Skantze. Stockholm, Sweden, 2009, pp. 131-132. Conference paper (Refereed)
    Abstract [en]

    This paper presents work on using prosody in the output of spoken dialogue systems to resolve possible structural ambiguity of output utterances. An algorithm is proposed to discover ambiguous parses of an utterance and to add prosodic disambiguation events to deliver the intended structure. A pilot experiment shows that automatic prosodic grouping applied to ambiguous sentences can deliver the intended interpretation of the sentences.

  • 50.
    Al Moubayed, Samer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Towards rich multimodal behavior in spoken dialogues with embodied agents. 2013. In: 4th IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2013 - Proceedings. IEEE Computer Society, 2013, pp. 817-822. Conference paper (Refereed)
    Abstract [en]

    Spoken dialogue frameworks have traditionally been designed to handle a single stream of data: the speech signal. Research on human-human communication has provided ample evidence of, and quantified, the effects and importance of a multitude of other multimodal nonverbal signals that people use in their communication and that shape and regulate their interaction. Driven by findings from multimodal human spoken interaction, and by advances in capture devices, robotics, and animation technologies, new possibilities are arising for the development of multimodal human-machine interaction that is more affective, social, and engaging. In such face-to-face interaction scenarios, dialogue systems can have a large set of signals at their disposal to infer context and to enhance and regulate the interaction through the generation of verbal and nonverbal facial signals. This paper summarizes several design decisions and experiments that we have pursued in attempts to build rich and fluent multimodal interactive systems using a newly developed hybrid robotic head called Furhat, and discusses issues and challenges that this effort is facing.
