Classification of imagined spoken word-pairs using convolutional neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily relied on feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37% and 60.88% vs. 57.80% (p < 0.005).
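
The evaluation protocol described in the abstract (a binary word-pair classifier tuned with nested cross-validation and benchmarked against an LDA classifier trained on FBCSP features) can be illustrated with a short sketch. This is not the authors' code: the data shapes and hyper-parameter grid are invented, and plain CSP from MNE stands in for the full filterbank CSP feature extraction.

# Hypothetical nested cross-validation sketch for a binary imagined-speech
# decoder: outer folds estimate accuracy, inner folds select hyper-parameters.
# CSP + LDA stands in for the paper's FBCSP + LDA baseline.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

def nested_cv_accuracy(X, y, n_outer=5, n_inner=4):
    """X: (n_trials, n_channels, n_samples) EEG epochs; y: binary word labels."""
    pipe = Pipeline([
        ("csp", CSP(n_components=4, reg=None, log=True)),
        ("lda", LinearDiscriminantAnalysis()),
    ])
    # Inner loop: choose the number of CSP components on training folds only.
    inner = StratifiedKFold(n_splits=n_inner, shuffle=True, random_state=0)
    search = GridSearchCV(pipe, {"csp__n_components": [2, 4, 6, 8]},
                          cv=inner, scoring="accuracy")
    # Outer loop: unbiased accuracy estimate for the tuned pipeline.
    outer = StratifiedKFold(n_splits=n_outer, shuffle=True, random_state=0)
    scores = cross_val_score(search, X, y, cv=outer, scoring="accuracy")
    return scores.mean(), scores.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 64, 512))   # 120 trials, 64 channels, 512 samples
    y = rng.integers(0, 2, size=120)          # two imagined words
    mean_acc, std_acc = nested_cv_accuracy(X, y)
    print(f"nested-CV accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")

The CNN side of the comparison can be sketched in a similar spirit. The record does not specify the two architectures used, so the model below is a generic shallow ConvNet-style EEG classifier (temporal convolution, spatial convolution across channels, squaring, average pooling, log) offered purely as an illustration; the layer sizes and input dimensions are assumptions.

import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    """Illustrative shallow ConvNet-style classifier for EEG epochs."""
    def __init__(self, n_channels=64, n_samples=512, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 25))           # filter along time
        self.spatial = nn.Conv2d(40, 40, kernel_size=(n_channels, 1),   # mix channels
                                 bias=False)
        self.bn = nn.BatchNorm2d(40)
        self.pool = nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15))
        with torch.no_grad():                                            # infer feature size
            n_feat = self._features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_feat, n_classes)

    def _features(self, x):
        x = self.spatial(self.temporal(x))
        x = self.bn(x)
        x = torch.square(x)                        # squaring non-linearity
        x = self.pool(x)
        x = torch.log(torch.clamp(x, min=1e-6))    # log of pooled band power
        return x.flatten(start_dim=1)

    def forward(self, x):                          # x: (batch, 1, channels, samples)
        return self.classifier(self._features(x))

model = ShallowEEGNet()
logits = model(torch.randn(8, 1, 64, 512))         # -> (8, 2) class scores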
Language: English
Title of host publication: Proceedings of the 8th Graz Brain Computer Interface Conference 2019
Subtitle of host publication: Bridging Science and Application
Editors: Gernot R Muller-Putz, Jonas C Ditz, Selina C Wriessnegger
Pages: 338-343
Volume: 2019
DOIs: https://doi.org/10.3217/978-3-85125-682-6-62
Publication status: Published - 20 Sep 2019
Event: The 8th Graz BCI Conference, 2019 - Institute of Neural Engineering, Graz University of Technology, Graz, Austria
Duration: 16 Sep 2019 - 20 Sep 2019
https://www.tugraz.at/institute/ine/graz-bci-conferences/8th-graz-bci-conference-2019/

Publication series

Name: Proceedings of the 8th Graz Brain-Computer Interface Conference 2019
Publisher: Graz University of Technology
ISSN (Print): 2311-0422

Conference

Conference: The 8th Graz BCI Conference, 2019
Country: Austria
City: Graz
Period: 16/09/19 - 20/09/19
Internet address: https://www.tugraz.at/institute/ine/graz-bci-conferences/8th-graz-bci-conference-2019/

Fingerprint

Brain computer interface
Neural networks
Decoding
Classifiers
Discriminant analysis
Electroencephalography
Learning systems
Feature extraction
Brain

Keywords

  • Electroencephalogram (EEG)
  • Imagined Speech
  • Convolutional Neural Network
  • Brain-Computer Interface

Cite this

Cooney, C., Korik, A., Raffaella, F., & Coyle, D. (2019). Classification of imagined spoken word-pairs using convolutional neural networks. In G. R. Muller-Putz, J. C. Ditz, & S. C. Wriessnegger (Eds.), Proceedings of the 8th Graz Brain Computer Interface Conference 2019: Bridging Science and Application (Vol. 2019, pp. 338-343). (Proceedings of the 8th Graz Brain-Computer Interface Conference 2019). https://doi.org/10.3217/978-3-85125-682-6-62
Cooney, Ciaran ; Korik, Attila ; Raffaella, Folli ; Coyle, Damien. / Classification of imagined spoken word-pairs using convolutional neural networks. Proceedings of the 8th Graz Brain Computer Interface Conference 2019: Bridging Science and Application. editor / Gernot R Muller-Putz ; Jonas C Ditz ; Selina C Wriessnegger. Vol. 2019 2019. pp. 338-343 (Proceedings of the 8th Graz Brain-Computer Interface Conference 2019).
@inproceedings{7586ba4f3f044f7b859f1e6696999320,
title = "Classification of imagined spoken word-pairs using convolutional neural networks",
abstract = "Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily relied on feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37{\%} and 60.88{\%} vs. 57.80{\%} (p < 0.005).",
keywords = "Electroencephalogram (EEG), Imagined Speech, Convolutional Neural Network, Brain-Computer Interface",
author = "Ciaran Cooney and Attila Korik and Folli Raffaella and Damien Coyle",
year = "2019",
month = "9",
day = "20",
doi = "10.3217/978-3-85125-682-6-62",
language = "English",
isbn = "978-3-85125-682-6",
volume = "2019",
series = "Proceedings of the 8th Graz Brain-Computer Interface Conference 2019",
publisher = "Graz University of Technology",
pages = "338--343",
editor = "Muller-Putz, {Gernot R} and Ditz, {Jonas C} and Wriessnegger, {Selina C}",
booktitle = "Proceedings of the 8th Graz Brain Computer Interface Conference 2019",

}

Cooney, C, Korik, A, Raffaella, F & Coyle, D 2019, Classification of imagined spoken word-pairs using convolutional neural networks. in GR Muller-Putz, JC Ditz & SC Wriessnegger (eds), Proceedings of the 8th Graz Brain Computer Interface Conference 2019: Bridging Science and Application. vol. 2019, Proceedings of the 8th Graz Brain-Computer Interface Conference 2019, pp. 338-343, The 8th Graz BCI Conference, 2019, Graz, Austria, 16/09/19. https://doi.org/10.3217/978-3-85125-682-6-62

Classification of imagined spoken word-pairs using convolutional neural networks. / Cooney, Ciaran; Korik, Attila; Raffaella, Folli; Coyle, Damien.

Proceedings of the 8th Graz Brain Computer Interface Conference 2019: Bridging Science and Application. ed. / Gernot R Muller-Putz; Jonas C Ditz; Selina C Wriessnegger. Vol. 2019 2019. p. 338-343 (Proceedings of the 8th Graz Brain-Computer Interface Conference 2019).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Classification of imagined spoken word-pairs using convolutional neural networks

AU - Cooney, Ciaran

AU - Korik, Attila

AU - Raffaella, Folli

AU - Coyle, Damien

PY - 2019/9/20

Y1 - 2019/9/20

N2 - Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily relied on feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37% and 60.88% vs. 57.80% (p < 0.005).

AB - Imagined speech is gaining traction as a communicative paradigm for brain-computer interfaces (BCIs), as a growing body of research indicates the potential for decoding speech processes directly from the brain. The development of this type of direct-speech BCI has primarily relied on feature extraction and machine learning approaches typical of BCI decoding. Here, we consider deep learning as a possible alternative to traditional BCI methodologies for imagined speech EEG decoding. Two different convolutional neural networks (CNNs) were trained on multiple imagined speech word-pairs, and their performance was compared to a baseline linear discriminant analysis (LDA) classifier trained on filterbank common spatial patterns (FBCSP) features. Classifiers were trained using nested cross-validation to enable hyper-parameter optimization. The CNNs outperformed the FBCSP baseline, with average accuracies of 62.37% and 60.88% vs. 57.80% (p < 0.005).

KW - Electroencephalogram (EEG)

KW - Imagined Speech

KW - Convolutional Neural Network

KW - Brain-Computer Interface

UR - http://diglib.tugraz.at/proceedings-of-the-8th-graz-brain-computer-interface-conference-2019-bridging-science-and-application-2019

U2 - 10.3217/978-3-85125-682-6-62

DO - 10.3217/978-3-85125-682-6-62

M3 - Conference contribution

SN - 978-3-85125-682-6

VL - 2019

T3 - Proceedings of the 8th Graz Brain-Computer Interface Conference 2019

SP - 338

EP - 343

BT - Proceedings of the 8th Graz Brain Computer Interface Conference 2019

A2 - Muller-Putz, Gernot R

A2 - Ditz, Jonas C

A2 - Wriessnegger, Selina C

ER -

Cooney C, Korik A, Raffaella F, Coyle D. Classification of imagined spoken word-pairs using convolutional neural networks. In Muller-Putz GR, Ditz JC, Wriessnegger SC, editors, Proceedings of the 8th Graz Brain Computer Interface Conference 2019: Bridging Science and Application. Vol. 2019. 2019. p. 338-343. (Proceedings of the 8th Graz Brain-Computer Interface Conference 2019). https://doi.org/10.3217/978-3-85125-682-6-62