Digitala Vetenskapliga Arkivet

SpaceRefNet: a neural approach to spatial reference resolution in a real city environment
Kalpakchi, Dmytro. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0001-7327-3059
Boye, Johan. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-2600-7668
2019 (English). In: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, Association for Computational Linguistics, 2019, p. 422-431. Conference paper, oral presentation with published abstract (Refereed).
Abstract [en]

Adding interactive capabilities to pedestrian wayfinding systems in the form of spoken dialogue will make them more natural to humans. Such an interactive wayfinding system needs to continuously understand and interpret the pedestrian's utterances referring to the spatial context. Achieving this requires the system to identify exophoric referring expressions in the utterances and link these expressions to geographic entities in the vicinity. This exophoric spatial reference resolution problem is difficult, as there are often several dozen candidate referents. We present a neural network-based approach for identifying the pedestrian's references (using a network called RefNet) and resolving them to appropriate geographic objects (using a network called SpaceRefNet). Both methods show promising results, beating the respective baselines and results previously reported in the literature.
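The paper details the actual RefNet and SpaceRefNet architectures. Purely as an illustrative sketch of the task setup described above, and not the authors' implementation, a model could encode the referring expression with a recurrent encoder and score every nearby geographic candidate against it; all names and the per-candidate features below (distance, bearing, object type) are assumptions:

# Hypothetical sketch of scoring candidate referents for an exophoric
# spatial referring expression; NOT the paper's RefNet or SpaceRefNet,
# just a minimal illustration of the task setup.
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128, cand_feats=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.utt_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        # cand_feats: assumed per-candidate features, e.g. distance to
        # the pedestrian, relative bearing, and a coarse object-type id.
        self.score = nn.Sequential(
            nn.Linear(hidden + cand_feats, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, utt_tokens, cand_features):
        # utt_tokens: (batch, seq_len) token ids of the expression,
        # e.g. "the red building on my left"
        # cand_features: (batch, n_candidates, cand_feats)
        _, h = self.utt_enc(self.embed(utt_tokens))  # h: (1, batch, hidden)
        h = h[-1].unsqueeze(1).expand(-1, cand_features.size(1), -1)
        logits = self.score(torch.cat([h, cand_features], -1)).squeeze(-1)
        return logits  # softmax over the last dim gives referent probabilities

Calling the scorer with, say, a (1, 6) batch of token ids and a (1, 30, 3) tensor of candidate features returns 30 logits, one per candidate landmark; training such a model would typically minimise cross-entropy with the gold referent's index as the class label.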

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2019, p. 422-431.
National Category
Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-262883
DOI: 10.18653/v1/w19-5949
ISI: 000591510500049
Scopus ID: 2-s2.0-85091595033
OAI: oai:DiVA.org:kth-262883
DiVA id: diva2:1363222
Conference
20th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2019, 11-13 September 2019, Stockholm, Sweden
Note

QC 20210914

Available from: 2019-10-22. Created: 2019-10-22. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (1630 kB), 298 downloads
File information
File name: FULLTEXT01.pdf
File size: 1630 kB
Checksum: SHA-512
6ce18ca70d4e0a05e57bd21320433d24eee7f3529f88ca43a8fd64025b1f2c76c61879ac23f9d1b1b54827fcd91c70b4fe23587c0c459bb858c34166e484b4f6
Type: fulltext
Mimetype: application/pdf
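If the full text is downloaded, the SHA-512 checksum above can be recomputed locally to confirm the file is intact. A minimal sketch, assuming the PDF was saved under its DiVA name FULLTEXT01.pdf in the working directory:

# Recompute and compare the SHA-512 checksum listed in the record.
import hashlib

EXPECTED = (
    "6ce18ca70d4e0a05e57bd21320433d24eee7f3529f88ca43a8fd64025b1f2c76"
    "c61879ac23f9d1b1b54827fcd91c70b4fe23587c0c459bb858c34166e484b4f6"
)

with open("FULLTEXT01.pdf", "rb") as f:
    digest = hashlib.sha512(f.read()).hexdigest()

print("checksum OK" if digest == EXPECTED else "checksum MISMATCH")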

Other links

Publisher's full text (published version)
Scopus
