Digitala Vetenskapliga Arkivet

Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
Luleå tekniska universitet, EISLAB. ORCID iD: 0000-0002-6756-0147
Luleå tekniska universitet, EISLAB. ORCID iD: 0000-0002-3785-8380
Luleå tekniska universitet, EISLAB. ORCID iD: 0000-0001-8532-0895
Luleå tekniska universitet, EISLAB. ORCID iD: 0000-0003-0221-8268
2023 (English). In: Scientific Data, E-ISSN 2052-4463, Vol. 10, no. 1, article id 378. Article in journal (Refereed). Published.
Abstract [en]

The recognition of inner speech, which could give a ‘voice’ to patients who are unable to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner-speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was presented in 40 trials, resulting in 320 trials per modality per participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards the development of speech prostheses.
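For orientation, a minimal Python sketch of the trial bookkeeping implied by the abstract (8 word stimuli × 40 trials = 320 trials per modality per participant). The word labels, participant IDs, and modality names below are illustrative placeholders, not the dataset's actual annotations.

# Trial layout implied by the abstract: 8 word stimuli x 40 trials
# = 320 trials in each modality for each of the four participants.
# All names below are hypothetical placeholders for illustration.
from itertools import product

WORDS = [f"word{i}" for i in range(8)]             # 8 stimuli (social/numerical categories)
N_TRIALS = 40                                      # trials per word
MODALITIES = ["eeg", "fmri"]                       # acquired nonsimultaneously
PARTICIPANTS = [f"sub-0{i}" for i in range(1, 5)]  # four participants

trials = [
    (sub, mod, word, t)
    for sub, mod, word, t in product(PARTICIPANTS, MODALITIES, WORDS, range(N_TRIALS))
]
assert len(trials) == 4 * 2 * 8 * 40
# 320 trials in each modality for each participant, as stated in the abstract
print(len([t for t in trials if t[0] == "sub-01" and t[1] == "eeg"]))  # -> 320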

Place, publisher, year, edition, pages
Springer Nature, 2023. Vol. 10, no. 1, article id 378
National Category
Computer Sciences; Computer graphics and computer vision
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:mau:diva-75695
DOI: 10.1038/s41597-023-02286-w
ISI: 001006100600001
PubMedID: 37311807
Scopus ID: 2-s2.0-85161923014
OAI: oai:DiVA.org:mau-75695
DiVA id: diva2:1955481
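The DOI above can be dereferenced programmatically. As a convenience, here is a standard-library Python sketch that requests a BibTeX entry via DOI content negotiation (standard doi.org behavior for Crossref-registered DOIs; requires network access).

# Fetch a BibTeX citation for this record through DOI content negotiation.
# The DOI is taken from the identifiers above; only the standard library is used.
import urllib.request

DOI = "10.1038/s41597-023-02286-w"
req = urllib.request.Request(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},  # ask for BibTeX instead of a landing page
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))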
Note

Validated; 2023; Level 2; 2023-06-13 (hanlid)

Funder: Grants for Excellent Research Projects Proposals of SRT.ai 2022

Available from: 2025-04-30. Created: 2025-04-30. Last updated: 2025-05-07. Bibliographically approved.

Open Access in DiVA

fulltext (5686 kB)
File information
File name: FULLTEXT01.pdf
File size: 5686 kB
Checksum (SHA-512): 8d557619cc77862ab9995d7f4530eabe057a083a9f0ac014515998c1792b529f996fadbee098ef22b29b54a0dc5aa64048bf08a52605e89d4bdcab76f8f12187
Type: fulltext
Mimetype: application/pdf
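To confirm an intact download, the SHA-512 checksum listed above can be verified locally. A minimal sketch, assuming the full text was saved as FULLTEXT01.pdf in the working directory:

# Verify the downloaded full text against the SHA-512 checksum above.
# The filename is the one listed in the record; adjust the path if saved elsewhere.
import hashlib
from pathlib import Path

EXPECTED = (
    "8d557619cc77862ab9995d7f4530eabe057a083a9f0ac014515998c1792b529f"
    "996fadbee098ef22b29b54a0dc5aa64048bf08a52605e89d4bdcab76f8f12187"
)
digest = hashlib.sha512(Path("FULLTEXT01.pdf").read_bytes()).hexdigest()
print("checksum OK" if digest == EXPECTED else "checksum MISMATCH")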

By author/editor
Simistira Liwicki, Foteini; Gupta, Vibha; Saini, Rajkumar; De, Kanjar; Abid, Nosheen; Rakesh, Sumit; Wellington, Scott; Liwicki, Marcus
