Show simple item record

dc.contributor.author: Chowdhury, Radia Rayan
dc.contributor.author: Muhammad, Yar
dc.contributor.author: Adeel, Usman
dc.date.accessioned: 2023-09-25T13:45:03Z
dc.date.available: 2023-09-25T13:45:03Z
dc.date.issued: 2023-09-15
dc.identifier.citation: Chowdhury, R. R., Muhammad, Y. & Adeel, U. 2023, 'Enhancing Cross-Subject Motor Imagery Classification in EEG-Based Brain–Computer Interfaces by Using Multi-Branch CNN', Sensors, vol. 23, no. 18, 7908. https://doi.org/10.3390/s23187908
dc.identifier.issn: 1424-3210
dc.identifier.other: ORCID: /0000-0002-2281-0886/work/143285592
dc.identifier.other: Jisc: 1389546
dc.identifier.uri: http://hdl.handle.net/2299/26728
dc.description: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.description.abstract [en]: A brain–computer interface (BCI) is a computer-based system that enables communication between the brain and the outside world, allowing users to interact with computers through neural activity recorded as electroencephalogram (EEG) signals. A major obstacle to developing EEG-based BCIs is the classification of subject-independent motor imagery data, because EEG signals vary considerably between individuals. Deep learning techniques such as convolutional neural networks (CNNs) have proven effective at feature extraction and can thereby improve classification accuracy. In this paper, we present a multi-branch (five-branch) 2D convolutional neural network that uses a different set of hyperparameters in each branch. The proposed model achieved promising results for cross-subject classification and outperformed EEGNet, ShallowConvNet, DeepConvNet, MMCNN, and EEGNet_Fusion on three public datasets. Our proposed model, EEGNet Fusion V2, achieves 89.6% and 87.8% accuracy for the actual and imagined motor activity of the eegmmidb dataset, and 74.3% and 84.1% for the BCI IV-2a and IV-2b datasets, respectively. However, the proposed model has a somewhat higher computational cost, taking roughly 3.5 times more computation time per sample than EEGNet_Fusion.
dc.format.extent: 16
dc.format.extent: 395401
dc.language.iso: eng
dc.relation.ispartof: Sensors
dc.subject: Convolutional neural network (CNN)
dc.subject: Brain–computer interface (BCI)
dc.subject: Deep Learning
dc.subject: fusion network
dc.subject: motor imagery (MI)
dc.subject: Electroencephalography (EEG)
dc.subject: brain–computer interface (BCI)
dc.subject: electroencephalography (EEG)
dc.subject: deep learning
dc.subject: convolutional neural network (CNN)
dc.subject: Brain-Computer Interfaces
dc.subject: Brain
dc.subject: Neural Networks, Computer
dc.subject: Electroencephalography
dc.subject: Communication
dc.subject: Analytical Chemistry
dc.subject: Information Systems
dc.subject: Instrumentation
dc.subject: Atomic and Molecular Physics, and Optics
dc.subject: Electrical and Electronic Engineering
dc.subject: Biochemistry
dc.title [en]: Enhancing Cross-Subject Motor Imagery Classification in EEG-Based Brain–Computer Interfaces by Using Multi-Branch CNN
dc.contributor.institution: Biocomputation Research Group
dc.contributor.institution: School of Physics, Engineering & Computer Science
dc.contributor.institution: Department of Computer Science
dc.description.status: Peer reviewed
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85172725217&partnerID=8YFLogxK
rioxxterms.versionofrecord: 10.3390/s23187908
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true
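
The abstract describes a five-branch 2D CNN (EEGNet Fusion V2) in which each branch applies its own hyperparameters before the branch outputs are fused for classification. The Python/Keras sketch below is only a rough, hypothetical illustration of that general multi-branch idea, not the authors' released implementation; the EEGNet-style branch layout, the particular kernel lengths and dropout rates, and the input dimensions (64 channels, 480 samples, 4 classes) are assumptions chosen for the example.

# Hypothetical sketch of a multi-branch 2D CNN for motor-imagery EEG (not the
# authors' code): five parallel EEGNet-style branches that differ in temporal
# kernel length and dropout rate, fused by concatenation before classification.
from tensorflow.keras import layers, models

def eeg_branch(x, n_channels, temporal_kernel, dropout_rate):
    """One branch: temporal convolution, spatial depthwise convolution, pooling."""
    b = layers.Conv2D(8, (1, temporal_kernel), padding="same", use_bias=False)(x)
    b = layers.BatchNormalization()(b)
    # Depthwise convolution across all electrodes acts as a learned spatial filter.
    b = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=2, use_bias=False)(b)
    b = layers.BatchNormalization()(b)
    b = layers.Activation("elu")(b)
    b = layers.AveragePooling2D((1, 4))(b)
    b = layers.Dropout(dropout_rate)(b)
    return layers.Flatten()(b)

def build_multi_branch_cnn(n_channels=64, n_samples=480, n_classes=4):
    # Input: one EEG trial shaped (channels, time samples, 1).
    inputs = layers.Input(shape=(n_channels, n_samples, 1))
    # Five branches with assumed (kernel length, dropout) settings per branch.
    branch_configs = [(16, 0.25), (32, 0.25), (64, 0.5), (96, 0.5), (128, 0.5)]
    branches = [eeg_branch(inputs, n_channels, k, d) for k, d in branch_configs]
    merged = layers.Concatenate()(branches)  # fuse branch features
    outputs = layers.Dense(n_classes, activation="softmax")(merged)
    return models.Model(inputs, outputs)

model = build_multi_branch_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Concatenating the flattened branch outputs before the final dense layer is one simple fusion strategy; running several convolutional branches in parallel is also why such a model needs more computation per sample than a single-branch network such as EEGNet.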

