Recursive Subtree Composition in LSTM-Based Dependency Parsing
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Languages, Department of Linguistics and Philology (Computational Linguistics). ORCID iD: 0000-0001-8844-2126
IBM. (IBM Research AI)
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Languages, Department of Linguistics and Philology. (Computational Linguistics)
2019 (English). In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, p. 1566-1576. Conference paper, published paper (refereed).
Abstract [en]

The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to rich history-based features, both of which are crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
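
To make the composition operation concrete, the following is a minimal, hypothetical sketch in PyTorch; it is not the authors' implementation, and names such as SubtreeComposer and BiLSTMFeatures are invented for illustration. It shows a BiLSTM encoder producing one contextual vector per token, and a recursive composition cell that replaces a head's vector with a vector composed from head and dependent whenever the parser attaches a dependent.

# Hypothetical sketch (not the authors' code): recursive subtree composition
# on top of BiLSTM token features in a transition-based parser.
import torch
import torch.nn as nn

class SubtreeComposer(nn.Module):
    """Composes a head vector and a dependent vector into a new subtree vector."""
    def __init__(self, dim):
        super().__init__()
        # Simple recursive NN cell: concatenate head and dependent, project back to dim.
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, head_vec, dep_vec):
        return torch.tanh(self.proj(torch.cat([head_vec, dep_vec], dim=-1)))

class BiLSTMFeatures(nn.Module):
    """Encodes a sentence with a BiLSTM; each token gets one contextual vector."""
    def __init__(self, emb_dim, hidden_dim, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (1, sentence_length)
        vecs, _ = self.bilstm(self.embed(token_ids))
        return vecs.squeeze(0)  # (sentence_length, 2 * hidden_dim)

if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = BiLSTMFeatures(emb_dim=32, hidden_dim=16, vocab_size=100)
    composer = SubtreeComposer(dim=32)  # 2 * hidden_dim

    tokens = torch.tensor([[5, 7, 9]])   # toy sentence of 3 tokens
    feats = encoder(tokens)              # one BiLSTM vector per token

    # Parser state: each stack item starts out as its token's BiLSTM vector.
    stack = [feats[0], feats[1], feats[2]]

    # A left-arc style transition: the last token becomes head of the middle one.
    # Composition replaces the head's representation with a composed subtree vector
    # instead of keeping the head's plain BiLSTM vector.
    dep = stack.pop(1)
    stack[1] = composer(stack[1], dep)
    print(stack[1].shape)  # torch.Size([32])

Without the composer, the parser would keep using the head's unmodified BiLSTM vector after each attachment; the paper's ablations ask when this extra recursive layer actually adds information beyond what the BiLSTM already encodes.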

Place, publisher, year, edition, pages
2019. p. 1566-1576
Keywords [en]
dependency parsing, recursive neural networks, recurrent neural networks, long short-term memory networks
National Category
General Language Studies and Linguistics
Research subject
Computational Linguistics
Identifiers
URN: urn:nbn:se:uu:diva-395676
OAI: oai:DiVA.org:uu-395676
DiVA id: diva2:1364985
Conference
2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Minneapolis, June 2-7, 2019.
Available from: 2019-10-23. Created: 2019-10-23. Last updated: 2019-11-01. Bibliographically approved.

Open Access in DiVA

fulltext (348 kB), 2 downloads
File information
File name: FULLTEXT01.pdf
File size: 348 kB
Checksum SHA-512: 2eb36d987ec53a12d1ffaa7ad256929faa957ef69fe4024180d8159863310c44d2fd17fb7140e5f77caef6e482e3fa21ff631dc0cf50e86b34ba38f5bfbbd949
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
de Lhoneux, Miryam; Nivre, Joakim
By organisation
Department of Linguistics and Philology
General Language Studies and Linguistics
