SpaceRefNet: a neural approach to spatial reference resolution in a real city environment
Kalpakchi, Dmytro. KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0001-7327-3059
Boye, Johan. KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-2600-7668
2019 (English). In: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, Association for Computational Linguistics, 2019, p. 422-431. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Adding interactive capabilities to pedestrian wayfinding systems in the form of spoken dialogue will make them more natural to humans. Such an interactive wayfinding system needs to continuously understand and interpret the pedestrian's utterances referring to the spatial context. Achieving this requires the system to identify exophoric referring expressions in the utterances and to link these expressions to the geographic entities in the vicinity. This exophoric spatial reference resolution problem is difficult, as there are often several dozen candidate referents. We present a neural network-based approach for identifying the pedestrian's references (using a network called RefNet) and resolving them to the appropriate geographic objects (using a network called SpaceRefNet). Both methods show promising results, beating the respective baselines and earlier results reported in the literature.
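
The two-stage approach outlined in the abstract (first detect a referring expression in the utterance, then rank the nearby geographic objects as candidate referents) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for exposition: the class names, the BiLSTM tagger, the MLP-based candidate scorer, and all dimensions are placeholders standing in for RefNet and SpaceRefNet, not the architectures reported in the paper.

```python
import torch
import torch.nn as nn

class ReferenceTagger(nn.Module):
    """BIO-style tagger marking which utterance tokens belong to a
    referring expression (illustrative stand-in for RefNet)."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):             # (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)               # (batch, seq_len, n_tags)

class ReferentScorer(nn.Module):
    """Scores each candidate geographic object against an encoded
    referring expression; the argmax over scores is the resolved
    referent (illustrative stand-in for SpaceRefNet)."""
    def __init__(self, expr_dim=256, cand_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(expr_dim + cand_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, expr_vec, cand_vecs):
        # expr_vec:  (batch, expr_dim)         one detected expression
        # cand_vecs: (batch, n_cand, cand_dim) features of nearby objects
        n_cand = cand_vecs.size(1)
        expr = expr_vec.unsqueeze(1).expand(-1, n_cand, -1)
        return self.mlp(torch.cat([expr, cand_vecs], dim=-1)).squeeze(-1)

# Toy usage: tag a 7-token utterance, then rank 30 candidate objects
# (several dozen candidates, as in the problem setting above).
tagger = ReferenceTagger(vocab_size=5000)
scorer = ReferentScorer()
tags = tagger(torch.randint(0, 5000, (1, 7)))           # (1, 7, 3)
scores = scorer(torch.randn(1, 256), torch.randn(1, 30, 64))
resolved = scores.argmax(dim=-1)                        # referent index
```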

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2019. p. 422-431
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-262883
OAI: oai:DiVA.org:kth-262883
DiVA id: diva2:1363222
Conference
20th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2019, 11-13 September 2019, Stockholm, Sweden
Note

QC 20191022

Available from: 2019-10-22. Created: 2019-10-22. Last updated: 2019-10-22. Bibliographically approved.

Open Access in DiVA

fulltext (1630 kB)
File information
File name: FULLTEXT01.pdf
File size: 1630 kB
Checksum (SHA-512): 6ce18ca70d4e0a05e57bd21320433d24eee7f3529f88ca43a8fd64025b1f2c76c61879ac23f9d1b1b54827fcd91c70b4fe23587c0c459bb858c34166e484b4f6
Type: fulltext
Mimetype: application/pdf

Other links

Published version
