Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure
Number of Authors: 3
2012 (English) Conference paper (Refereed)
It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.
Computer and Information Science
Identifiers
URN: urn:nbn:se:ri:diva-15211
OAI: oai:DiVA.org:ri-15211
DiVA: diva2:1036527
The 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2012)