EEG-based Brain Computer Interface with Deep Learning

Authors

Zhang, Patrick

Type

thesis

Language

eng

Keyword

EEG, Brain Computer Interface, Deep Learning, Semi-Supervised Learning, Riemannian Metric Learning, Capsule Network, Long Short-Term Memory, Biosignal Processing

Abstract

Electroencephalogram (EEG)-based Brain Computer Interfaces (BCIs) are widely used in application domains ranging from human-computer interaction to medical and biomedical applications. To address the challenge of EEG representation learning, we first propose a Long Short-Term Memory (LSTM) network with an attention mechanism that learns how the importance of EEG information varies over time: time steps carrying more discriminative information receive higher attention scores and contribute more to the classification decision. Our model significantly outperforms state-of-the-art solutions for hand movement classification.

We then provide a generalized solution for a variety of BCI applications by learning EEG representations that combine spatial and temporal information. Spatial information (Riemannian mean and distance) is learned from spatial covariance matrices on a Riemannian manifold, while temporal information is learned in Euclidean space from EEG features extracted over consecutive time periods; an effective fusion strategy then combines the two. This method performs strongly across all our experiments (emotion recognition, motor imagery classification, and vigilance estimation), approaching the state of the art on one dataset (SEED) and considerably outperforming the best existing results on the other three (BCI-IV 2A, BCI-IV 2B, and SEED-VIG), setting a new state of the art.

Next, we tackle multimodal learning and propose an architecture composed of a capsule attention mechanism following a deep LSTM network for joint EEG and Electrooculogram (EOG) learning. The model captures hierarchical dependencies in the data through the LSTM and capsule feature-representation layers. Experiments show that it is robust to noise and capable of identifying both correlated and independent information in EEG and EOG; it outperforms other solutions and baseline techniques, setting a new state of the art on the vigilance estimation dataset (SEED-VIG).

Finally, to overcome the shortage of labeled training data, we propose a novel semi-supervised architecture for learning reliable EEG representations. The model uses pairwise representation alignment to reduce the potential distribution mismatch between large amounts of unlabeled data and the limited labeled data. Our approach obtains strong results, outperforming other methods in the majority of few-label experimental conditions across multiple emotion recognition datasets (SEED, SEED-IV, SEED-V, AMIGOS).
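For concreteness, the attention-over-time idea from the first contribution can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the thesis's exact architecture: the additive scoring layer, the hidden size, and the four-class output are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    """LSTM encoder with additive attention over time steps.

    Time steps carrying more discriminative EEG information receive
    higher attention scores and contribute more to the decision.
    (Sizes are illustrative, not the thesis's configuration.)
    """
    def __init__(self, n_channels=22, hidden=128, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # one relevance score per step
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, channels)
        h, _ = self.lstm(x)                      # h: (batch, time, hidden)
        alpha = torch.softmax(self.score(h), dim=1)   # (batch, time, 1)
        context = (alpha * h).sum(dim=1)         # attention-weighted summary
        return self.classifier(context)

# Toy usage: a batch of 8 trials, 250 time steps, 22 EEG channels.
logits = AttentiveLSTM()(torch.randn(8, 250, 22))
print(logits.shape)  # torch.Size([8, 4])
```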
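The spatial branch of the generalized solution rests on two standard quantities on the manifold of symmetric positive-definite matrices: the affine-invariant Riemannian distance between spatial covariance matrices and their Riemannian (Karcher) mean. A self-contained NumPy sketch of both follows; the sample-covariance estimator, the fixed-point iteration, and the toy dimensions are illustrative choices rather than the thesis's implementation (libraries such as pyriemann offer hardened versions).

```python
import numpy as np

def _powm(S, p):
    """Matrix power of a symmetric positive-definite matrix via eigh."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def _logm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def _expm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def spatial_covariance(trial):
    """Sample spatial covariance of one EEG trial (channels x time)."""
    trial = trial - trial.mean(axis=1, keepdims=True)
    return trial @ trial.T / (trial.shape[1] - 1)

def riemann_distance(A, B):
    """Affine-invariant distance: ||log(A^-1/2 B A^-1/2)||_F."""
    A_isqrt = _powm(A, -0.5)
    return np.linalg.norm(_logm(A_isqrt @ B @ A_isqrt), 'fro')

def riemann_mean(covs, n_iter=20):
    """Karcher mean via a fixed-point iteration in the tangent space."""
    M = covs.mean(axis=0)                        # Euclidean initialisation
    for _ in range(n_iter):
        M_sqrt, M_isqrt = _powm(M, 0.5), _powm(M, -0.5)
        T = np.mean([_logm(M_isqrt @ C @ M_isqrt) for C in covs], axis=0)
        M = M_sqrt @ _expm(T) @ M_sqrt           # map back to the manifold
    return M

# Toy usage: 10 trials, 22 channels, 250 samples each.
rng = np.random.default_rng(0)
trials = rng.standard_normal((10, 22, 250))
covs = np.stack([spatial_covariance(t) for t in trials])
mean = riemann_mean(covs)
dists = [riemann_distance(mean, C) for C in covs]  # spatial features
```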
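The multimodal architecture can be outlined in the same spirit: one deep LSTM per modality, hidden states grouped into capsules, and an attention stage weighting the squashed capsules before a fusion head. Everything below (the capsule dimension, the absence of dynamic routing, the channel counts, and the regression head) is an assumption made for brevity, not the thesis's exact design.

```python
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing: preserves orientation, maps norm into [0, 1)."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class CapsuleAttentionNet(nn.Module):
    """Deep LSTM encoders per modality followed by capsule attention."""
    def __init__(self, eeg_ch=17, eog_ch=4, hidden=64, caps_dim=16, n_out=1):
        super().__init__()
        self.eeg_lstm = nn.LSTM(eeg_ch, hidden, num_layers=2, batch_first=True)
        self.eog_lstm = nn.LSTM(eog_ch, hidden, num_layers=2, batch_first=True)
        self.caps_dim = caps_dim
        self.attn = nn.Linear(caps_dim, 1)       # relevance score per capsule
        self.head = nn.Linear(caps_dim, n_out)

    def forward(self, eeg, eog):                 # (batch, time, channels) each
        h_eeg, _ = self.eeg_lstm(eeg)
        h_eog, _ = self.eog_lstm(eog)
        h = torch.cat([h_eeg[:, -1], h_eog[:, -1]], dim=-1)  # fuse modalities
        caps = squash(h.view(h.size(0), -1, self.caps_dim))  # capsule groups
        alpha = torch.softmax(self.attn(caps), dim=1)        # capsule weights
        return self.head((alpha * caps).sum(dim=1))          # fused estimate

# Toy usage: vigilance estimation as regression over a batch of 8 samples.
out = CapsuleAttentionNet()(torch.randn(8, 100, 17), torch.randn(8, 100, 4))
print(out.shape)  # torch.Size([8, 1])
```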
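Finally, the semi-supervised objective pairs a supervised loss on the few labeled trials with an alignment penalty that reduces the gap between labeled and unlabeled representations. The sketch below substitutes a simple mean-embedding (moment-matching) distance for the alignment term; the thesis's actual pairwise alignment loss, the encoder, the feature dimensions, and the weighting factor lam are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(encoder, classifier, x_lab, y_lab, x_unlab, lam=0.1):
    """Cross-entropy on labeled data plus a representation-alignment term.

    The alignment term penalises the distance between the mean embeddings
    of the labeled and unlabeled batches, discouraging distribution
    mismatch between them (a simple stand-in for pairwise alignment).
    """
    z_lab = encoder(x_lab)                       # (n_lab, d) representations
    z_unlab = encoder(x_unlab)                   # (n_unlab, d)
    sup = F.cross_entropy(classifier(z_lab), y_lab)
    align = (z_lab.mean(dim=0) - z_unlab.mean(dim=0)).pow(2).sum()
    return sup + lam * align

# Toy usage with linear modules standing in for the real networks.
enc = torch.nn.Linear(310, 64)                   # e.g. 310-dim EEG features
clf = torch.nn.Linear(64, 3)                     # e.g. 3 emotion classes
loss = semi_supervised_loss(
    enc, clf,
    torch.randn(16, 310), torch.randint(0, 3, (16,)),
    torch.randn(128, 310),
)
loss.backward()
```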

License

Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
ProQuest PhD and Master's Theses International Dissemination Agreement
Intellectual Property Guidelines at Queen's University
Copying and Preserving Your Thesis
This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
Attribution-NonCommercial-NoDerivs 3.0 United States
