Classification of imagined spoken word-pairs using convolutional neural networks

Research output: Contribution to conference › Paper

Abstract

Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily considered feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37% and 60.88% vs. 57.80% (p < 0.005).
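To make the evaluation protocol described in the abstract concrete, the sketch below illustrates nested cross-validation of an LDA classifier on simple filter-bank log-variance features. This is a minimal, hypothetical stand-in for the FBCSP-LDA baseline only: the scikit-learn/SciPy pipeline, frequency bands, sampling rate and synthetic data are assumptions made for demonstration, not the authors' implementation, and the spatial-filtering (CSP) step is omitted for brevity.

# Sketch only: nested cross-validation of an LDA classifier on log-variance
# band-power features, as a simplified stand-in for an FBCSP-LDA baseline.
# Shapes, bands and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

FS = 256  # assumed EEG sampling rate (Hz)

class FilterBankLogVar(BaseEstimator, TransformerMixin):
    """Band-pass each trial in several bands and take log-variance per channel."""
    def __init__(self, bands=((4, 8), (8, 12), (12, 30))):
        self.bands = bands
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        # X: (n_trials, n_channels, n_samples)
        feats = []
        for lo, hi in self.bands:
            b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
            Xf = filtfilt(b, a, X, axis=-1)
            feats.append(np.log(Xf.var(axis=-1) + 1e-12))
        return np.concatenate(feats, axis=1)

# Synthetic data standing in for imagined-speech EEG trials of one word-pair.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 8, 2 * FS))   # 80 trials, 8 channels, 2 s each
y = rng.integers(0, 2, size=80)            # binary word-pair labels

pipe = Pipeline([
    ("fb", FilterBankLogVar()),
    ("lda", LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")),
])

# Inner loop tunes the LDA shrinkage; outer loop estimates accuracy (nested CV).
inner = GridSearchCV(pipe, {"lda__shrinkage": [None, "auto", 0.1, 0.5]}, cv=3)
scores = cross_val_score(inner, X, y, cv=5)
print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

Under the same protocol, the two CNNs described in the paper would replace the pipeline inside the inner loop, with their hyper-parameters tuned on the inner folds and accuracy reported only on the outer folds.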

Fingerprint

Brain computer interface
Neural networks
Decoding
Classifiers
Discriminant analysis
Electroencephalography
Learning systems
Feature extraction
Brain

Keywords

  • Electroencephalogram (EEG)
  • Imagined Speech
  • Convolutional Neural Network
  • Brain-Computer Interface

Cite this

Cooney, C., Korik, A., Folli, R., & Coyle, D. (Accepted/In press). Classification of imagined spoken word-pairs using convolutional neural networks. Paper presented at The 8th Graz BCI Conference, 2019, Graz, Austria.
@conference{7586ba4f3f044f7b859f1e6696999320,
title = "Classification of imagined spoken word-pairs using convolutional neural networks",
abstract = "Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily considered feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37{\%} and 60.88{\%} vs. 57.80{\%} (p < 0.005).",
keywords = "Electroencephalogram (EEG), Imagined Speech, Convolutional Neural Network, Brain-Computer Interface",
author = "Ciaran Cooney and Attila Korik and Raffaella Folli and Damien Coyle",
year = "2019",
month = "5",
day = "2",
language = "English",
note = "The 8th Graz BCI Conference, 2019 ; Conference date: 16-09-2019 Through 20-09-2019",
url = "https://www.tugraz.at/institute/ine/graz-bci-conferences/8th-graz-bci-conference-2019/",

}

Cooney, C, Korik, A, Folli, R & Coyle, D 2019, 'Classification of imagined spoken word-pairs using convolutional neural networks', paper presented at The 8th Graz BCI Conference, 2019, Graz, Austria, 16/09/19 - 20/09/19.

Classification of imagined spoken word-pairs using convolutional neural networks. / Cooney, Ciaran; Korik, Attila; Folli, Raffaella; Coyle, Damien.

2019. Paper presented at The 8th Graz BCI Conference, 2019, Graz, Austria.

Research output: Contribution to conference › Paper

TY - CONF

T1 - Classification of imagined spoken word-pairs using convolutional neural networks

AU - Cooney, Ciaran

AU - Korik, Attila

AU - Folli, Raffaella

AU - Coyle, Damien

PY - 2019/5/2

Y1 - 2019/5/2

N2 - Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily considered feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37% and 60.88% vs. 57.80% (p < 0.005).

AB - Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily considered feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37% and 60.88% vs. 57.80% (p < 0.005).

KW - Electroencephalogram (EEG)

KW - Imagined Speech

KW - Convolutional Neural Network

KW - Brain-Computer Interface

M3 - Paper

ER -