Digitala Vetenskapliga Arkivet

Making Chatbots More Conversational: Using Follow-Up Questions for Maximizing the Informational Value in Evaluation Responses
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology.
2019 (English). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

This thesis presents a detailed outline of a system that analyzes textual, conversation-based evaluation responses with the aim of maximizing the informational value extracted from them. This is achieved by asking intelligent follow-up questions where the system deems it necessary. The system is realized as a multi-stage pipeline. The first stage uses a neural network, trained on manually extracted linguistic features of an evaluation answer, to assess whether a follow-up question is required. Next, a separate network employing word embeddings determines which question to ask. Finally, the grammatical structure of the answer is analyzed to decide how the predicted question should be composed in terms of grammatical properties such as tense and plurality, so that it sounds as natural and human-like as possible. The resulting system was satisfactory overall. When exposed to evaluation answers from the educational sector, it asked, in the majority of cases, relevant follow-up questions that delved deeper into the user's answer. The narrow domain and the meager amount of annotated data naturally made the system very domain-specific, which is partially counteracted by the use of secondary predictions and default fallback questions. However, using the system out of the box on arbitrary, unrestricted answers is likely to produce a very vague follow-up question. With a substantial increase in the amount of annotated data, this would most likely improve considerably.
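
The abstract describes a three-stage flow: a classifier over hand-crafted linguistic features decides whether to follow up, an embedding-based model picks which question to ask, and a final step adjusts the question's grammatical form. As a rough illustration only, the Python sketch below mimics that flow end to end; every name, feature, weight, and rule in it is an assumption made for the example and is not taken from the thesis.

```python
# Hypothetical sketch of the three-stage flow described in the abstract.
# Feature set, weights, toy embeddings, question bank and tense rules are
# illustrative assumptions, not the implementation from the thesis.
import numpy as np

# --- Stage 1: decide whether a follow-up question is needed -----------------
def extract_features(answer: str) -> np.ndarray:
    """Toy linguistic features; the thesis relies on manually engineered ones."""
    tokens = answer.lower().split()
    return np.array([
        len(tokens),                                        # answer length
        sum(t.strip(".,!?") in {"good", "bad", "ok", "fine"} for t in tokens),  # bare evaluative words
        answer.count(","),                                  # rough clause-complexity proxy
    ], dtype=float)

def needs_follow_up(answer: str) -> bool:
    """Stand-in for the trained classifier: a fixed linear score through a sigmoid."""
    weights, bias = np.array([-0.2, 0.8, -0.1]), 1.5        # short, vague answers score high
    score = 1.0 / (1.0 + np.exp(-(extract_features(answer) @ weights + bias)))
    return score > 0.5

# --- Stage 2: predict which follow-up question to ask -----------------------
QUESTION_BANK = {
    "why": "Why do you feel that way?",
    "example": "Could you give a concrete example?",
    "improve": "What would you like to see improved?",
}
# Toy topic vectors standing in for learned representations over word embeddings.
TOPIC_VECTORS = {k: np.random.default_rng(i).normal(size=16)
                 for i, k in enumerate(QUESTION_BANK)}

def embed(answer: str) -> np.ndarray:
    """Hash-seeded pseudo-embeddings averaged over tokens (placeholder for real embeddings)."""
    vecs = [np.random.default_rng(abs(hash(t)) % 2**32).normal(size=16)
            for t in answer.lower().split()]
    return np.mean(vecs, axis=0) if vecs else np.zeros(16)

def predict_question(answer: str) -> str:
    """Closest topic wins; below a similarity floor, fall back to a default question."""
    a = embed(answer)
    sims = {k: float(a @ v) for k, v in TOPIC_VECTORS.items()}
    best = max(sims, key=sims.get)
    return QUESTION_BANK[best] if sims[best] > 0.0 else "Could you tell me more about that?"

# --- Stage 3: adapt the surface form of the question -------------------------
def realize(question: str, answer: str) -> str:
    """Crude tense matching: shift the question to past tense if the answer uses it."""
    past = any(t in {"was", "were", "did", "had"} for t in answer.lower().split())
    if past:
        question = question.replace("do you feel", "did you feel")
        question = question.replace("would you like", "would you have liked")
    return question

def follow_up(answer: str):
    """Full pipeline: returns a follow-up question, or None if none is needed."""
    if not needs_follow_up(answer):
        return None
    return realize(predict_question(answer), answer)

if __name__ == "__main__":
    print(follow_up("The course was ok, I guess."))   # short, vague -> follow-up asked
    print(follow_up("The lectures were engaging and well structured, the labs matched "
                    "the theory, and the feedback on assignments was quick and helpful."))  # detailed -> None
```

The three functions map onto the three stages named in the abstract; in the thesis, the first two are trained neural networks and the third is driven by grammatical analysis, whereas here they are deliberately simplistic placeholders.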

Place, publisher, year, edition, pages
2019, p. 79
Series
UPTEC IT, ISSN 1401-5749 ; 19012
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:uu:diva-393269
OAI: oai:DiVA.org:uu-393269
DiVA, id: diva2:1352380
Educational program
Master of Science Programme in Information Technology Engineering
Available from: 2019-09-18. Created: 2019-09-18. Last updated: 2019-09-18. Bibliographically approved.

Open Access in DiVA

fulltext (4817 kB), 921 downloads
File information
File name: FULLTEXT01.pdf. File size: 4817 kB. Checksum: SHA-512
67742738624eb3c6a3bb9199f5493da7d9dfaa555595a676c5056a1cdd0fc7d6a429f54a4ab27b4768b25bf17900d4cb914e1ab5963c5a3ecc74c0350d3cf411
Type: fulltext. Mimetype: application/pdf

Total: 922 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
