Brain-computer interfaces (BCIs) have received considerable attention in gaming, enabling innovative interactions with digital environments. Visual Evoked Potentials (VEPs)—robust, noninvasive neural responses to visual stimuli—offer high information transfer rates, making them particularly promising. This systematic review, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework, examines VEP-based BCIs in gaming. We searched the Web of Science and Google Scholar, identifying 16 347 studies from the past decade, with 46 selected for in-depth analysis after rigorous screening. The review explores VEP response modeling, electroencephalography (EEG) signal acquisition and processing, stimulation paradigms, and their gaming applications. These systems enhance accessibility for players with physical or cognitive impairments, support adaptive difficulty scaling, personalize gameplay, aid neurorehabilitation, and enable multiplayer interactions. However, challenges remain, including technical limitations, complex data interpretation, user adaptability, and ergonomic issues. Advances in signal processing, personalized calibration, and hybrid multimodal approaches could improve usability. Future research should focus on integrating VEP-based BCIs with emerging technologies, optimizing user comfort, and developing adaptive interaction models to enhance immersion and accessibility. By addressing these challenges and utilizing neuroscience and computational advancements, VEP-based BCIs promise to transform gaming into a more inclusive and immersive experience for diverse users.
@ARTICLE{Keutayeva_Neurotech_2025,
author={Keutayeva, Aigerim and Jesse Nwachukwu, China and Alaran, Muslim and Otarbay, Zhenis and Abibullaev, Berdakh},
journal={IEEE Access},
title={Neurotechnology in Gaming: A Systematic Review of Visual Evoked Potential-Based Brain-Computer Interfaces},
year={2025},
volume={13},
number={},
pages={74944-74966},
doi={10.1109/ACCESS.2025.3564328}}
The review explores VEP response modeling, electroencephalography (EEG) signal acquisition and processing, stimulation paradigms, and their gaming applications.
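Steady-state VEP (SSVEP) paradigms, one of the stimulation schemes covered by the review, tag each on-screen target with a distinct flicker frequency and identify the attended target from the occipital power spectrum. As a minimal illustration (a generic sketch on synthetic data, not a method from the reviewed paper), frequency-domain target detection might look like:

```python
import numpy as np

def detect_ssvep_target(eeg, fs, candidate_freqs):
    """Pick the stimulation frequency with the largest spectral power.

    eeg: 1-D signal from an occipital channel; fs: sampling rate in Hz.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in candidate_freqs:
        # Sum power in a narrow band around each candidate flicker frequency.
        band = (freqs > f - 0.25) & (freqs < f + 0.25)
        powers.append(spectrum[band].sum())
    return candidate_freqs[int(np.argmax(powers))]

# Synthetic occipital trace: a 12 Hz flicker response buried in noise.
fs = 250
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_ssvep_target(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # -> 12.0
```

Real systems typically replace the raw power comparison with canonical correlation analysis over multiple channels, but the frequency-tagging principle is the same.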
Motor imagery electroencephalography (EEG) analysis is crucial for the development of effective brain-computer interfaces (BCIs), yet it presents considerable challenges due to the complexity of the data and inter-subject variability. This paper introduces EEGCCT, an application of compact convolutional transformers designed specifically to improve the analysis of motor imagery tasks in EEG. Unlike traditional approaches, the EEGCCT model significantly enhances generalization from limited data, effectively addressing a common limitation in EEG datasets. We validate and test our models using the open-source BCI Competition IV datasets 2a and 2b, employing a Leave-One-Subject-Out (LOSO) strategy to ensure subject-independent performance. Our findings demonstrate that EEGCCT not only outperforms conventional models like EEGNet in standard evaluations but also achieves better performance compared to other advanced models such as Conformer, Hybrid s-CViT, and Hybrid t-CViT, while utilizing fewer parameters and achieving an accuracy of 70.12%. Additionally, the paper presents a comprehensive ablation study that includes targeted data augmentation, hyperparameter optimization, and architectural improvements.
@article{keutayeva2024compact,
title={Compact convolutional transformer for subject-independent motor imagery EEG-based BCIs},
author={Keutayeva, Aigerim and Fakhrutdinov, Nail and Abibullaev, Berdakh},
journal={Scientific Reports},
volume={14},
number={1},
pages={25775},
year={2024},
publisher={Nature Publishing Group UK London}
}
This paper introduces EEGCCT, an application of compact convolutional transformers designed specifically to improve the analysis of motor imagery tasks in EEG.
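The LOSO protocol used above to ensure subject independence generalizes beyond any one architecture: train on all subjects except one, test on the held-out subject, and repeat. A minimal sketch, with LogisticRegression and random features standing in for EEGCCT and real EEG data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def loso_accuracies(X, y, subjects, make_model):
    """Leave-One-Subject-Out: train on all subjects but one, test on the held-out one."""
    accs = {}
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        model = make_model().fit(X[train], y[train])
        accs[s] = model.score(X[test], y[test])
    return accs

# Toy stand-in data: 3 "subjects", linearly separable two-class features.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 4))
y = (X[:, 0] > 0).astype(int)          # label depends on one feature
subjects = np.repeat([0, 1, 2], 40)
accs = loso_accuracies(X, y, subjects, lambda: LogisticRegression())
print(accs)  # one held-out accuracy per subject
```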
This work reviews the critical challenge of data scarcity in developing Transformer-based models for Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs), specifically focusing on Motor Imagery (MI) decoding. While EEG-BCIs hold immense promise for applications in communication, rehabilitation, and human-computer interaction, limited data availability hinders the use of advanced deep-learning models such as Transformers. In particular, this paper comprehensively analyzes three key strategies to address data scarcity: data augmentation, transfer learning, and the inherent attention mechanisms of Transformers. Data augmentation techniques artificially expand datasets, enhancing model generalizability by exposing them to a wider range of signal patterns. Transfer learning utilizes pre-trained models from related domains, leveraging their learned knowledge to overcome the limitations of small EEG datasets. By thoroughly reviewing current research and methodologies, this work underscores the importance of these strategies in overcoming data scarcity. It critically examines the limitations imposed by limited datasets and showcases potential solutions being developed to address these challenges. This comprehensive survey, focusing on the intersection of data scarcity and technological advancements, aims to provide a critical analysis of the current state-of-the-art in EEG-BCI development. By identifying research gaps and suggesting future directions, the paper encourages further exploration and innovation in this field. Ultimately, this work aims to contribute to the advancement of more accessible, efficient, and accurate EEG-BCI systems by addressing the fundamental challenge of data scarcity.
@ARTICLE{Keutayeva_Review_2024,
author={Keutayeva, Aigerim and Abibullaev, Berdakh},
journal={IEEE Access},
title={Data Constraints and Performance Optimization for Transformer-Based Models in EEG-Based Brain-Computer Interfaces: A Survey},
year={2024},
volume={12},
number={},
pages={62628--62647},
doi={10.1109/ACCESS.2024.3394696}}
This work reviews the critical challenge of data scarcity in developing Transformer-based models for EEG-based BCIs, specifically focusing on Motor Imagery decoding.
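Data augmentation for EEG, the first of the strategies surveyed, typically applies label-preserving transforms directly to the raw trials. A minimal sketch (illustrative only, not a technique attributed to any specific surveyed paper) using additive Gaussian noise and small circular time shifts:

```python
import numpy as np

def augment_trials(trials, rng, noise_std=0.1, max_shift=10):
    """Create augmented copies of EEG trials shaped (n_trials, n_channels, n_samples).

    Two common, label-preserving transforms: additive Gaussian noise and
    small circular time shifts along the sample axis.
    """
    noisy = trials + rng.normal(0.0, noise_std, size=trials.shape)
    shifts = rng.integers(-max_shift, max_shift + 1, size=len(trials))
    shifted = np.stack([np.roll(tr, s, axis=-1) for tr, s in zip(trials, shifts)])
    return np.concatenate([trials, noisy, shifted])

rng = np.random.default_rng(42)
trials = rng.standard_normal((8, 3, 256))   # 8 trials, 3 channels, 256 samples
augmented = augment_trials(trials, rng)
print(augmented.shape)  # (24, 3, 256)
```

Tripling the effective dataset this way is one of the simplest responses to the data-scarcity problem discussed above; more elaborate schemes (frequency shifts, channel dropout, generative models) follow the same pattern.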
Brain-computer interfaces (BCIs) have undergone significant advancements in recent years. The integration of deep learning techniques, specifically transformers, has shown promising development in research and application domains. Transformers, which were originally designed for natural language processing, have now made notable inroads into BCIs, offering a unique self-attention mechanism that adeptly handles the temporal dynamics of brain signals. This comprehensive survey delves into the application of transformers in BCIs, providing readers with a lucid understanding of their foundational principles, inherent advantages, potential challenges, and diverse applications. In addition to discussing the benefits of transformers, we also address their limitations, such as computational overhead, interpretability concerns, and the data-intensive nature of these models, providing a well-rounded analysis. Furthermore, the paper sheds light on the myriad of BCI applications that have benefited from the incorporation of transformers. These applications span from motor imagery decoding, emotion recognition, and sleep stage analysis to novel ventures such as speech reconstruction. This review serves as a holistic guide for researchers and practitioners, offering a panoramic view of the transformative potential of transformers in the BCI landscape. With the inclusion of examples and references, readers will gain a deeper understanding of the topic and its significance in the field.
@ARTICLE{Abibullaev_Review_2023,
author={Abibullaev, Berdakh and Keutayeva, Aigerim and Zollanvari, Amin},
journal={IEEE Access},
title={Deep Learning in EEG-Based BCIs: A Comprehensive Review of Transformer Models, Advantages, Challenges, and Applications},
year={2023},
volume={11},
number={},
pages={127271--127301},
doi={10.1109/ACCESS.2023.3329678}}
This comprehensive survey delves into the application of transformers in BCIs, providing readers with a lucid understanding of their foundational principles, inherent advantages, potential challenges, and diverse applications.
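The self-attention mechanism the survey centers on can be illustrated in a few lines of NumPy. For clarity the query/key/value projections are left as identities here, whereas a real transformer learns them; the toy "EEG" input is an assumption for the demo:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a (timesteps, features) sequence."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise timestep similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over timesteps
    return weights @ X                              # context-mixed representation

# Toy "EEG" sequence: 50 timesteps of 8 band-power features.
rng = np.random.default_rng(7)
X = rng.standard_normal((50, 8))
out = self_attention(X)
print(out.shape)  # (50, 8)
```

Each output timestep is a weighted mixture of all input timesteps, which is why attention handles the long-range temporal dynamics of brain signals that the survey highlights.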
This study explores the use of attention mechanism-based deep learning models to construct subject-independent motor-imagery based brain-computer interfaces (MI-BCIs), which present unique and intricate challenges from a machine learning perspective. By comparing four attention mechanism-based models and employing nested LOSO methods for robust model selection, the study enhances the reliability of performance estimates and offers unique insights into the application of attention mechanisms in building subject-independent BCIs. The results indicate the potential of the Spatio-Temporal CNN + ViT model for practical BCI applications, as it outperforms other models on several datasets. Additionally, the study presents a realistic approach to building subject-independent BCIs by combining attention mechanisms and deep learning models to identify informative features common across subjects while filtering out noise and irrelevant data. While there are limitations and areas for future work to enhance the potential of these models, transformer-based models could become even more valuable in the BCI research field, leading to more robust and accurate subject-independent BCIs for various applications. The need for subject-independent MI-BCIs is amplified due to their potential in assisting individuals with severe neurological conditions, such as ALS and locked-in syndrome, which severely limit mobility and communication.
@ARTICLE{Keutayeva_stCNN_ViT_2023,
author={Keutayeva, Aigerim and Abibullaev, Berdakh},
journal={IEEE Access},
title={Exploring the Potential of Attention Mechanism-Based Deep Learning for Robust Subject-Independent Motor-Imagery Based BCIs},
year={2023},
volume={11},
number={},
pages={107562--107580},
doi={10.1109/ACCESS.2023.3320561}}
This study explores the use of attention mechanism-based deep learning models to construct subject-independent motor-imagery based brain-computer interfaces (MI-BCIs), which present unique and intricate challenges from a machine learning perspective.
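The nested LOSO procedure mentioned above can be sketched generically: an inner LOSO loop over the training subjects selects hyperparameters, and only then is the held-out subject scored. LogisticRegression and the regularization grid below stand in for the actual attention-based models and their hyperparameters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nested_loso(X, y, subjects, candidate_Cs):
    """Nested LOSO: the inner loop picks a hyperparameter on training subjects;
    the outer loop estimates accuracy on a subject never seen during selection."""
    outer_accs = []
    for s_out in np.unique(subjects):
        tr, te = subjects != s_out, subjects == s_out
        # Inner LOSO over the remaining subjects selects the regularization C.
        best_C, best_score = None, -1.0
        for C in candidate_Cs:
            inner_scores = []
            for s_in in np.unique(subjects[tr]):
                fit = tr & (subjects != s_in)
                val = subjects == s_in
                model = LogisticRegression(C=C).fit(X[fit], y[fit])
                inner_scores.append(model.score(X[val], y[val]))
            if np.mean(inner_scores) > best_score:
                best_C, best_score = C, float(np.mean(inner_scores))
        # Refit on all training subjects with the selected C, then test on s_out.
        model = LogisticRegression(C=best_C).fit(X[tr], y[tr])
        outer_accs.append(model.score(X[te], y[te]))
    return outer_accs

# Toy data: 3 "subjects" x 30 trials, linearly separable labels.
rng = np.random.default_rng(3)
X = rng.standard_normal((90, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)
subjects = np.repeat([0, 1, 2], 30)
accs = nested_loso(X, y, subjects, candidate_Cs=[0.1, 1.0])
print(accs)
```

Keeping hyperparameter selection inside the outer training fold is what makes the resulting performance estimates unbiased with respect to the held-out subject.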
Additive manufacturing is a promising manufacturing process with diverse applications, but ensuring the quality and reliability of the manufactured products is a key challenge. The digital twin has emerged as a technology solution to address this challenge, allowing real-time monitoring and control of the manufacturing process. This paper proposes a digital twin system framework for additive manufacturing that integrates machine learning models, employing Unity, OctoPrint, and Raspberry Pi for real-time control and monitoring. In particular, the system utilizes machine learning models for defect detection, achieving an Average Precision (AP) score of 92%, with specific performance metrics of 91% for defected objects and 94% for non-defected objects, demonstrating high efficiency. The Unity client user interface is also developed for control and visualization, facilitating easy monitoring of the additive manufacturing process. This research article presents a detailed description of the proposed digital twin framework and its workflow for implementation, the machine learning models, and the Unity client user interface. It also demonstrates the effectiveness of the integrated system through case studies and experimental results. The main findings show that the proposed digital twin system meets its functional requirements, effectively detects defects, and provides real-time control and monitoring of the additive manufacturing process. This paper contributes to the growing field of digital twin technology and additive manufacturing, providing a promising solution for enhancing quality and reliability in additive manufacturing.
@ARTICLE{Jyeniskhan_Digital_Twin_2023,
author={Jyeniskhan, Nursultan and Keutayeva, Aigerim and Kazbek, Gani and Ali, Md Hazrat and Shehab, Essam},
journal={IEEE Access},
title={Integrating Machine Learning Model and Digital Twin System for Additive Manufacturing},
year={2023},
volume={11},
number={},
pages={71113--71126},
doi={10.1109/ACCESS.2023.3294486}}
This paper proposes a digital twin system framework for additive manufacturing that integrates machine learning models, employing Unity, OctoPrint, and Raspberry Pi for real-time control and monitoring.
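As a reminder of how the Average Precision (AP) metric reported above is computed, here is a toy example; the labels and detector scores are hypothetical, not data from the paper:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical detector scores: higher means "defect" (label 1 = defected object).
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_score = np.array([0.92, 0.85, 0.40, 0.30, 0.15, 0.55, 0.10, 0.78])

# AP summarizes the precision-recall curve: the sum of precision at each
# recall step, weighted by the increase in recall.
ap = average_precision_score(y_true, y_score)
print(round(ap, 3))  # -> 0.95
```

Unlike plain accuracy, AP is insensitive to the classification threshold, which makes it a common choice for defect-detection tasks with imbalanced classes.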
Brain-Computer Interfaces (BCIs) can revolutionize human-computer interaction by enabling users to engage with technology through cognitive processes. BCIs have a wide range of prospective applications, including restoring mobility and communication in disabled individuals, augmenting human performance across diverse domains, and providing novel instruments for scientific exploration. However, one of the significant challenges in developing BCIs is ensuring that they work for different people, regardless of their differences in cognitive abilities, language backgrounds, ages, and physical conditions.
This thesis investigates robust subject-independent BCIs using attention mechanism-based deep learning models. The ability to create subject-independent BCIs is crucial for their practical use, as it can reduce the time and cost associated with individual calibration for each user. Additionally, robust subject-independent BCIs can help to improve accessibility for people with severe illnesses, such as amyotrophic lateral sclerosis (ALS), locked-in syndrome, and other conditions that limit mobility and communication abilities.
This study uses attention mechanism-based deep learning models to identify the most informative features that are common across all subjects while filtering out noise and irrelevant information. We use two different types of BCI datasets, one based on Event-Related Potentials (ERPs) and the other based on Motor Imagery (MI), to evaluate the performance of our chosen approach. The results show that the attention mechanism-based deep learning models can achieve high levels of accuracy and robustness across different subjects and have the potential to improve the usability of BCIs in various applications.
@MASTERSTHESIS{Keutayeva_thesis_2023,
author={Keutayeva, Aigerim},
school={School of Engineering and Digital Sciences, Nazarbayev University},
title={Robust Subject-Independent BCIs using Attention Mechanism-Based Deep Learning Models},
year={2023},
url={http://nur.nu.edu.kz/handle/123456789/7135}}
This thesis investigates robust subject-independent BCIs using attention mechanism-based deep learning models on 4 ERP- and 4 MI-based BCI datasets. Access by request only.
This research examines the use of attention mechanism-driven deep learning models for building subject-independent Brain-Computer Interfaces (BCIs). The research evaluated three different attention models using the Leave-One-Subject-Out cross-validation method. The results showed that the Hybrid Temporal CNN and ViT model performed well on the BCI Competition IV 2a dataset, achieving the highest average accuracy and outperforming other models for 5 out of 9 subjects. However, this model did not perform the best on the BCI Competition IV 2b dataset when compared to other methods. One of the challenges faced was the limited size of the data, especially for transformer models that require large amounts of data, which affected the performance variability between datasets. This study highlights a beneficial approach to designing BCIs, combining attention mechanisms with deep learning to extract important inter-subject features from EEG data while filtering out irrelevant signals.
@inproceedings{keutayeva2023subject,
title={Subject-Independent Brain-Computer Interfaces: A Comparative Study of Attention Mechanism-Driven Deep Learning Models},
author={Keutayeva, Aigerim and Abibullaev, Berdakh},
booktitle={International Conference on Intelligent Human Computer Interaction},
pages={245--254},
year={2023},
organization={Springer}
}
This research examines the use of attention mechanism-driven deep learning models for building subject-independent Brain-Computer Interfaces (BCIs).
This chapter explores the transformative impact of transformer models on EEG-based motor imagery brain-computer interfaces (BCIs)---systems that are pushing the boundaries of human-machine interaction. Transformers, renowned for their self-attention mechanisms, excel at handling sequential data, making them uniquely suited for decoding intricate EEG patterns. We offer a comprehensive review of transformer applications in BCIs, showcasing how they significantly improve signal interpretation accuracy, efficiency, and robustness. The chapter examines the technical foundations, including the inherent complexities of EEG signals---noise, non-stationarity, and intersubject variability---and how transformers tackle them through superior feature extraction and denoising capabilities. We trace the evolution of these models from traditional machine-learning approaches to sophisticated architectures that capture both temporal and spatial dependencies in EEG data. The chapter then delves into practical applications of these models in real-world BCI systems, discussing how they translate into tangible benefits for users. We explore prospects and ongoing research aimed at overcoming limitations like computational demands and the need for personalized models. By analyzing emerging trends and envisioning future directions, this chapter provides a roadmap for the BCI research community, ultimately leading to more intuitive, versatile, and effective human-computer interactions.
@Inbook{Keutayeva2024,
author="Keutayeva, Aigerim and Zollanvari, Amin and Abibullaev, Berdakh",
editor="Vinjamuri, Ramana",
title="Evolving Trends and Future Prospects of Transformer Models in EEG-Based Motor-Imagery BCI Systems",
bookTitle="Discovering the Frontiers of Human-Robot Interaction: Insights and Innovations in Collaboration, Communication, and Control",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="233--256",
isbn="978-3-031-66656-8",
doi="10.1007/978-3-031-66656-8_10",
url="https://doi.org/10.1007/978-3-031-66656-8_10"}
This chapter explores the evolving trends and future potential of Transformer models within EEG-based MI BCIs.
Designed and implemented an Event-Related Potential-based Brain-Computer Interface classifier using an ensemble model with Linear Discriminant Analysis, Support Vector Classifier, and k-Nearest Neighbor.
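Such an ensemble can be sketched with scikit-learn's VotingClassifier. The synthetic features below are a stand-in for real ERP epoch features, and soft voting over the three classifiers is one reasonable configuration, not necessarily the one used in the project:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in features; a real ERP pipeline would use epoch amplitudes instead.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the per-class probabilities of the three base models;
# SVC needs probability=True to supply them.
ensemble = VotingClassifier([
    ("lda", LinearDiscriminantAnalysis()),
    ("svc", SVC(probability=True)),
    ("knn", KNeighborsClassifier()),
], voting="soft")
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(round(acc, 2))
```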
Semi-Supervised Multispectral Scene Classification model with few labels using MsMatch, EfficientNet PyTorch, and data augmentations such as Imagio and Albumentations.
Real-time child-centered action recognition using 2D skeleton joints (24 OpenPose body key points) with Deep Neural Networks, Recurrent Neural Networks, and Long Short-Term Memory.