Optimizing Input Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG

Research output: Contribution to conference › Paper

Abstract

A brain-computer interface (BCI) that employs imagined speech as the mode of determining user intent requires strong generalizability for a feasible system to be realized. Research in this field has typically trained algorithms on a within-subject basis. However, even within-subject training and test data are not always drawn from the same feature space and distribution. Such scenarios can contribute to poor BCI performance, and real-world applications for imagined speech-based BCIs cannot assume homogeneity in user data. Transfer Learning (TL) is a common approach to improving generalizability in machine learning models by transferring knowledge from a source domain to a target task. In this study, two distinct TL methodologies are employed to classify EEG data corresponding to imagined speech production of vowels, using a deep convolutional neural network (CNN). Both TL approaches involved conditional training of the CNN on all subjects, excluding the target subject. A subset of the target-subject data was then used to fine-tune either the input or output layers of the CNN. Results were compared with a standard within-subject benchmark. Both TL methods significantly outperformed the baseline, and fine-tuning the input layers yielded the highest overall accuracy (35.68%; chance: 20%).
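
The training scheme described in the abstract — pretrain on all source subjects with the target subject held out, then fine-tune only selected layers on a small amount of target data — can be sketched as follows. This is a minimal NumPy illustration with a toy two-layer network and synthetic data standing in for EEG features; the paper itself uses a deep CNN, and every name, shape, and hyperparameter here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, W2):
    """Toy two-layer network: ReLU hidden layer, linear output logits."""
    H = np.maximum(X @ W1, 0.0)
    return H @ W2

def train(X, y, W1, W2, lr=0.05, steps=200, tune=("W1", "W2")):
    """Gradient descent on an MSE loss against one-hot targets.
    Only the layers named in `tune` are updated; the rest stay frozen."""
    Y = np.eye(W2.shape[1])[y]
    for _ in range(steps):
        H = np.maximum(X @ W1, 0.0)
        dOut = 2.0 * (H @ W2 - Y) / len(X)
        dH = (dOut @ W2.T) * (H > 0)  # backprop through W2 before any update
        if "W2" in tune:
            W2 -= lr * H.T @ dOut     # in-place update keeps caller's array
        if "W1" in tune:
            W1 -= lr * X.T @ dH
    return W1, W2

# Synthetic stand-ins for EEG features: pooled source subjects, plus a
# target subject whose feature distribution is shifted.
X_src = rng.normal(size=(200, 8))
y_src = (X_src[:, 0] > 0).astype(int)
shift = 0.5 * rng.normal(size=8)
X_tgt = rng.normal(size=(60, 8)) + shift
y_tgt = (X_tgt[:, 0] - shift[0] > 0).astype(int)

W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

# 1) Conditional training on all source subjects (target subject excluded).
train(X_src, y_src, W1, W2)
# 2) Fine-tune the input layer only, on a subset of target-subject data.
train(X_tgt[:40], y_tgt[:40], W1, W2, tune=("W1",))

acc = (forward(X_tgt[40:], W1, W2).argmax(axis=1) == y_tgt[40:]).mean()
print(f"target-subject accuracy after input-layer fine-tuning: {acc:.2f}")
```

The `tune` argument is the crux: passing `("W1",)` adapts the layer closest to the input (where the subject-specific distribution shift lives) while the frozen output layer retains knowledge learned from the source subjects — the same intuition behind the paper's input-layer result.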

Conference

Conference: IEEE International Conference on Systems, Man, and Cybernetics, 2019
Abbreviated title: IEEE SMC 2019
Country: Italy
City: Bari
Period: 6/10/19 - 9/10/19
Internet address: http://smc2019.org/

Keywords

  • Electroencephalogram (EEG)
  • Imagined Speech
  • Convolutional Neural Network
  • Deep Learning
  • Transfer Learning
  • brain computer interface

Cite this

Cooney, C., Folli, R., & Coyle, D. (Accepted/In press). Optimizing Input Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG. Paper presented at IEEE International Conference on Systems, Man, and Cybernetics, 2019, Bari, Italy. 6 p.
@conference{15581b40d99c4f8badd704fbb46ac6e1,
title = "Optimizing Input Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG",
abstract = "A brain-computer interface (BCI) that employs imagined speech as the mode of determining user intent requires strong generalizability for a feasible system to be realized. Research in this field has typically trained algorithms on a within-subject basis. However, even within-subject training and test data are not always drawn from the same feature space and distribution. Such scenarios can contribute to poor BCI performance, and real-world applications for imagined speech-based BCIs cannot assume homogeneity in user data. Transfer Learning (TL) is a common approach to improving generalizability in machine learning models by transferring knowledge from a source domain to a target task. In this study, two distinct TL methodologies are employed to classify EEG data corresponding to imagined speech production of vowels, using a deep convolutional neural network (CNN). Both TL approaches involved conditional training of the CNN on all subjects, excluding the target subject. A subset of the target-subject data was then used to fine-tune either the input or output layers of the CNN. Results were compared with a standard within-subject benchmark. Both TL methods significantly outperformed the baseline, and fine-tuning the input layers yielded the highest overall accuracy (35.68{\%}; chance: 20{\%}).",
keywords = "Electroencephalogram (EEG), Imagined Speech, Convolutional Neural Network, Deep Learning, Transfer Learning, brain computer interface",
author = "Ciaran Cooney and Raffaella Folli and Damien Coyle",
year = "2019",
language = "English",
note = "IEEE International Conference on Systems, Man, and Cybernetics, 2019 : Industry 4.0, IEEE SMC 2019 ; Conference date: 06-10-2019 Through 09-10-2019",
url = "http://smc2019.org/",

}
