Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification
Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. (Linnaeus Centre HEAD)
Linköping University, Department of Clinical and Experimental Medicine, Division of Neuroscience. Linköping University, Faculty of Health Sciences.
Linköping University, Department of Clinical and Experimental Medicine, Division of Neuroscience. Linköping University, Faculty of Health Sciences.
2014 (English). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 136, no. 2, pp. EL142-EL147. Article in journal (Refereed). Published.
Abstract [en]

The effects of audiovisual versus auditory-only training on speech-in-noise identification were examined in 60 young participants. The three conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the two training groups, gated consonants and words were presented at a 0 dB signal-to-noise ratio, either audiovisually or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or the control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating its superiority over auditory-only training.
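A 0 dB signal-to-noise ratio means the speech and the masking noise have equal average power. As a purely illustrative sketch (not taken from the paper; the function name, parameters, and use of NumPy are assumptions), one standard way to scale a noise signal to a target SNR before mixing is:

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Illustrative sketch only; the study's actual stimulus preparation is not described here.
    noise = noise[: len(speech)]                         # match lengths
    p_speech = np.mean(speech ** 2)                      # average power of the speech signal
    p_noise = np.mean(noise ** 2)                        # average power of the noise signal
    target_p_noise = p_speech / (10 ** (snr_db / 10))    # noise power needed for the target SNR
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# At 0 dB SNR (as in the training conditions), the scaled noise has the same
# average power as the speech before the two are mixed.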

Place, publisher, year, edition, pages
American Institute of Physics (AIP), 2014. Vol. 136, no. 2, pp. EL142-EL147.
Keyword [en]
Audiovisual training, audio training, speech-in-noise identification
National Category
Applied Psychology
Identifiers
URN: urn:nbn:se:liu:diva-108989
DOI: 10.1121/1.4890200
ISI: 000341178100014
PubMedID: 25096138
OAI: oai:DiVA.org:liu-108989
DiVA: diva2:734642
Funder
Swedish Research Council, 006-6917
Available from: 2014-07-19. Created: 2014-07-19. Last updated: 2017-12-05. Bibliographically approved.
In thesis
1. Time is of the essence in speech perception!: Get it fast, or think about it
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title[sv]
Lyssna nu! : Hör rätt direkt, eller klura på det!
Abstract [en]

The present thesis examined the extent to which background noise influences the isolation point (IP, the shortest time from the onset of a speech stimulus required for its correct identification) and accuracy in the identification of different types of speech stimuli (consonants, words, and final words in high-predictability [HP] and low-predictability [LP] sentences). These speech stimuli were presented in different modalities (auditory, visual, and audiovisual) to young normal-hearing listeners (Papers 1, 2, and 5). In addition, the thesis studied under what conditions cognitive resources were explicitly demanded in the identification of different types of speech stimuli (Papers 1 and 2). Further, elderly hearing-aid (EHA) users and elderly normal-hearing (ENH) listeners were compared with regard to IPs, accuracy, and the conditions under which explicit cognitive resources were demanded in the identification of auditory speech stimuli in silence (Paper 3). The results showed that background noise delayed IPs and reduced accuracy for the identification of different types of speech stimuli in both modalities of speech presentation. Explicit cognitive resources were demanded for the identification of speech stimuli in the auditory-only modality, under the noisy condition, and in the absence of a prior semantic context. In addition, audiovisual presentation of speech stimuli resulted in earlier IPs and more accurate identification than auditory-only presentation. Furthermore, pre-exposure to audiovisual speech stimuli resulted in better auditory speech-in-noise identification than pre-exposure to auditory-only speech stimuli (Papers 2 and 4). Compared with ENH listeners, EHA users showed later IPs in the identification of consonants, words, and final words in LP sentences; in terms of accuracy, they performed worse only in the identification of consonants and words. Only the identification of consonants and words demanded explicit cognitive resources in the EHA users. Theoretical predictions and clinical implications are discussed.
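The isolation point is defined above as the shortest time from stimulus onset needed for correct identification of a gated stimulus. As a purely illustrative sketch (not the thesis's scoring procedure; the function name, parameters, and example values are assumptions), one common way to score an IP from gated-presentation responses is to find the first gate from which the listener's answer is correct and stays correct at every longer gate:

def isolation_point(gate_durations_ms, responses, target):
    # Hypothetical scoring sketch: return the duration (ms) of the first gate from
    # which the response equals `target` and remains `target` for all longer gates;
    # return None if that point is never reached.
    for i, duration in enumerate(gate_durations_ms):
        if all(r == target for r in responses[i:]):
            return duration
    return None

# Example: gates of 40, 80, 120, 160 ms; the response is correct from the 120 ms gate onward.
print(isolation_point([40, 80, 120, 160], ["b", "d", "g", "g"], "g"))  # -> 120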

Abstract [sv]

This thesis examined how much background noise affects the isolation point (IP, the earliest point in time at which a spoken stimulus can be correctly identified) and accuracy in the identification of different types of spoken stimuli (consonants, words, and sentence-final words in high-predictability [HP] and low-predictability [LP] sentences). These spoken stimuli were presented in different modalities (auditory, visual, and audiovisual) to young normal-hearing participants (Papers 1, 2, and 5). In addition, the conditions under which explicit cognitive resources were required for the identification of different types of spoken stimuli were compared (Papers 1 and 2). Further, elderly hearing-aid (EHA) users and elderly normal-hearing (ENH) individuals were compared with regard to IP, identification accuracy, and the conditions under which explicit cognitive resources were required for auditory identification in silence (i.e., without background noise) (Paper 3). The results showed that background noise led to later IPs and lowered identification accuracy for different types of spoken stimuli in both presentation modalities. Explicit cognitive resources were required for the identification of spoken stimuli under purely auditory presentation with background noise, and when no prior semantic information was presented. Moreover, audiovisual presentation resulted in earlier IPs and more accurate identification of spoken stimuli compared with purely auditory presentation. A further result was that pre-exposure to audiovisual spoken stimuli resulted in better identification of speech in background noise than pre-exposure to auditory-only spoken stimuli (Papers 2 and 4). When EHA users and ENH individuals were compared, EHA users had later IPs in the identification of consonants, words, and sentence-final words in LP sentences. In addition, EHA users identified consonants and words less accurately. Only the identification of consonants and words required explicit cognitive resources in EHA users. Theoretical predictions and clinical implications are discussed.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2014. 56 p.
Series
Linköping Studies in Arts and Science, ISSN 0282-9800; 635
Studies from the Swedish Institute for Disability Research, ISSN 1650-1128; 68
Keyword
Noise, auditory speech perception, audiovisual speech perception, hearing aids, hearing
National Category
Other Medical Sciences; Public Health, Global Health, Social Medicine and Epidemiology
Identifiers
URN: urn:nbn:se:liu:diva-111723
DOI: 10.3384/diss.diva-111723
ISBN: 978-91-7519-188-1
Public defence
2014-11-28, I:101, Hus I, Campus Valla, Linköpings universitet, Linköping, 14:00 (Swedish)
Supervisors
Available from: 2014-10-29. Created: 2014-10-29. Last updated: 2014-10-31. Bibliographically approved.

Open Access in DiVA
fulltext (PDF, 183 kB)

By author/editor
Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor