Optimizing Input Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG

Research output: Contribution to conference (Paper)


Abstract

A brain-computer interface (BCI) that employs imagined speech as the mode of determining user intent requires strong generalizability for a feasible system to be realized. Research in this field has typically trained algorithms on a within-subject basis. However, even within-subject training and test data do not always share the same feature space and distribution. Such scenarios can contribute to poor BCI performance, and real-world applications for imagined speech-based BCIs cannot assume homogeneity in user data. Transfer learning (TL) is a common approach used to improve generalizability in machine learning models through transfer of knowledge from a source domain to a target task. In this study, two distinct TL methodologies are employed to classify EEG data corresponding to imagined speech production of vowels, using a deep convolutional neural network (CNN). Both TL approaches involved conditional training of the CNN on all subjects, excluding the target subject. A subset of the target subject's data was then used to fine-tune either the input or output layers of the CNN. Results were compared with a standard benchmark using a within-subject approach. Both TL methods significantly outperformed the baseline, and fine-tuning of the input layers resulted in the highest overall accuracy (35.68%; chance: 20%).
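The fine-tuning scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (in PyTorch), not the paper's actual architecture: after pre-training on the non-target subjects, all parameters are frozen and only the input layers are re-enabled for gradient updates on the target subject's data. The class name `EEGNetSketch`, the layer sizes, and the 64-channel/5-class setup are assumptions for illustration only (five classes matches the 20% chance level for vowel classification).

```python
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    """Toy CNN standing in for the paper's deep CNN (sizes are illustrative)."""

    def __init__(self, n_channels=64, n_classes=5):
        super().__init__()
        # "Input layers": the first temporal/spatial convolution over EEG channels.
        self.input_block = nn.Conv1d(n_channels, 16, kernel_size=5)
        self.hidden = nn.Sequential(
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # "Output layers": the final classification layer.
        self.output_block = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.output_block(self.hidden(self.input_block(x)))

def freeze_for_input_finetuning(model):
    """Freeze every parameter, then unfreeze only the input layers,
    so target-subject fine-tuning updates just those weights."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.input_block.parameters():
        p.requires_grad = True

# After conditional pre-training on all non-target subjects, restrict
# fine-tuning to the input layers; the optimizer sees only those parameters.
model = EEGNetSketch()
freeze_for_input_finetuning(model)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Fine-tuning the output layers instead (the paper's second TL method) would simply unfreeze `output_block` rather than `input_block`.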
Original language: English
Number of pages: 6
Publication status: Accepted/In press - 2019
Event: IEEE International Conference on Systems, Man, and Cybernetics, 2019: Industry 4.0 - Bari, Italy
Duration: 6 Oct 2019 - 9 Oct 2019
http://smc2019.org/

Conference

Conference: IEEE International Conference on Systems, Man, and Cybernetics, 2019
Abbreviated title: IEEE SMC 2019
Country: Italy
City: Bari
Period: 6/10/19 - 9/10/19
Internet address: http://smc2019.org/

Keywords

  • Electroencephalogram (EEG)
  • Imagined Speech
  • Convolutional Neural Network
  • Deep Learning
  • Transfer Learning
  • Brain-Computer Interface


Cite this

    Cooney, C., Raffaella, F., & Coyle, D. (Accepted/In press). Optimizing Input Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG. Paper presented at IEEE International Conference on Systems, Man, and Cybernetics, 2019, Bari, Italy.