Abstract
A brain-computer interface (BCI) that uses imagined speech to determine user intent requires strong generalizability before a feasible system can be realized. Research in this field has typically trained algorithms on a within-subject basis. However, even within-subject training and test data do not always share the same feature space and distribution. Such scenarios can contribute to poor BCI performance, and real-world applications of imagined speech-based BCIs cannot assume homogeneity in user data. Transfer learning (TL) is a common approach for improving the generalizability of machine learning models by transferring knowledge from a source domain to a target task. In this study, two distinct TL methodologies are employed to classify EEG data corresponding to imagined speech production of vowels, using a deep convolutional neural network (CNN). Both TL approaches involved conditional training of the CNN on all subjects except the target subject. A subset of the target subject's data was then used to fine-tune either the input or output layers of the CNN. Results were compared with a standard within-subject benchmark. Both TL methods significantly outperformed the baseline, and fine-tuning the input layers yielded the highest overall accuracy (35.68%; chance: 20%).
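The fine-tuning procedure described above (pre-train on all non-target subjects, then unfreeze only the input or output layers for the target subject) can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the network architecture, layer names, EEG channel count, and class count are assumptions chosen only to demonstrate the layer-freezing mechanic.

```python
# Hypothetical sketch of input-layer fine-tuning for transfer learning.
# Architecture, channel count (14), and class count (5 vowels, chance 20%)
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class VowelCNN(nn.Module):
    """Toy CNN mapping multi-channel EEG windows to vowel classes."""
    def __init__(self, n_channels=14, n_classes=5):
        super().__init__()
        self.input_block = nn.Sequential(           # the "input layers"
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.hidden = nn.Sequential(
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)  # the "output layer"

    def forward(self, x):
        h = self.hidden(self.input_block(x))
        return self.classifier(h.flatten(1))

def fine_tune_input_layers(model):
    """Freeze all parameters, then unfreeze only the input block,
    mirroring the input-layer transfer-learning condition; fine-tuning
    the output layers would instead unfreeze model.classifier."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.input_block.parameters():
        p.requires_grad = True
    return model

# After pre-training on all non-target subjects, only the input block
# would be updated on the target subject's data subset.
model = fine_tune_input_layers(VowelCNN())
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

An optimizer for the fine-tuning stage would then be built over only the trainable parameters, e.g. `torch.optim.Adam(p for p in model.parameters() if p.requires_grad)`.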
Original language | English |
---|---|
Number of pages | 6 |
Publication status | Accepted/In press - 2019 |
Event | IEEE International Conference on Systems, Man, and Cybernetics, 2019: Industry 4.0 - Bari, Italy (6 Oct 2019 → 9 Oct 2019), http://smc2019.org/ |
Conference
Conference | IEEE International Conference on Systems, Man, and Cybernetics, 2019 |
---|---|
Abbreviated title | IEEE SMC 2019 |
Country/Territory | Italy |
City | Bari |
Period | 6/10/19 → 9/10/19 |
Internet address | http://smc2019.org/ |
Keywords
- Electroencephalogram (EEG)
- Imagined Speech
- Convolutional Neural Network
- Deep Learning
- Transfer Learning
- Brain-Computer Interface