This paper presents a wearable brain-computer interface that exploits neurofeedback in extended reality to enhance motor imagery training. Visual and vibrotactile feedback modalities were evaluated both singly and in combination. Only three acquisition channels and state-of-the-art chest-based vibrotactile feedback were employed. Experimental validation was carried out with eight subjects, each participating in two or three sessions on different days, with 360 trials per subject per session. Neurofeedback led to a statistically significant improvement in performance across the sessions, thus demonstrating for the first time the functionality of a motor imagery-based instrument even when using a highly wearable electroencephalograph and a commercial gaming vibrotactile suit. In the best cases, classification accuracy exceeded 80%, with more than 20% improvement over initial performance. No feedback modality was generally preferable across the cohort; rather, it is concluded that the best feedback modality may be subject-dependent.
Funding Information:
This work was carried out as part of the “ICT for Health” project, which was financially supported by the Italian Ministry of Education, University and Research (MIUR), under the initiative ‘Departments of Excellence’ (Italian Budget Law no. 232/2016), through an excellence grant awarded to the Department of Information Technology and Electrical Engineering of the University of Naples Federico II, Naples, Italy. DC is supported by a UKRI Turing AI Fellowship 2021–2025 funded by the EPSRC (grant number EP/V025724/1). FD is supported by the project “Free energy principle and the brain: Neuronal and phylogenetic mechanisms of Bayesian inference” funded by the MIUR PRIN2020 (Grant No. 2020529PCP). The authors also thank Leah Hudson for her proofreading of the work.
© 2022 Elsevier Ltd
- Brain–computer interface
- Extended reality
- Motor imagery