High-frequency steady-state asymmetric visual evoked potential (SSaVEP) provides a new paradigm for designing comfortable and practical brain-computer interface (BCI) systems. However, because high-frequency signals have weak amplitude and strong noise, it is important to study how to enhance their signal features. In this study, a 30 Hz high-frequency visual stimulus was used, and the peripheral visual field was equally divided into eight annular sectors. Eight annular sector pairs were selected based on the mapping of visual space onto the primary visual cortex (V1), and three phase conditions (in-phase [0°, 0°], anti-phase [0°, 180°], and anti-phase [180°, 0°]) were designed for each pair to explore response intensity and signal-to-noise ratio under phase modulation. A total of 8 healthy subjects were recruited for the experiment. The results showed that three annular sector pairs exhibited significant differences in SSaVEP features under phase modulation at the 30 Hz high-frequency stimulation. Spatial feature analysis further showed that both types of features for annular sector pairs in the lower visual field were significantly higher than those in the upper visual field. This study further used filter bank and ensemble task-related component analysis to calculate the classification accuracy of the annular sector pairs under the three phase modulations, and the average accuracy reached 91.5%, demonstrating that phase-modulated SSaVEP features can be used to encode high-frequency SSaVEP. In summary, the results of this study provide new ideas for enhancing the features of high-frequency SSaVEP signals and expanding the instruction set of the traditional steady-state visual evoked potential paradigm.
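To give a concrete sense of the filter bank and task-related component analysis (TRCA) pipeline mentioned above, the following is a minimal numpy/scipy sketch of computing a TRCA spatial filter for one stimulus class after sub-band filtering; the array layout, sampling rate, band edges, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt


def bandpass(x, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter along the last axis (one filter-bank sub-band)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)


def trca_spatial_filter(trials):
    """Leading TRCA spatial filter for one stimulus class.

    trials: array of shape (n_trials, n_channels, n_samples), already band-pass filtered.
    Returns a weight vector of shape (n_channels,).
    """
    n_trials = trials.shape[0]
    trials = trials - trials.mean(axis=2, keepdims=True)      # remove per-trial mean

    # Q: covariance of all trials concatenated in time
    concat = np.hstack([trials[i] for i in range(n_trials)])
    Q = concat @ concat.T

    # S: sum of cross-trial covariances (inter-trial reproducibility)
    summed = trials.sum(axis=0)
    S = summed @ summed.T - sum(trials[i] @ trials[i].T for i in range(n_trials))

    # Generalized eigenvalue problem S w = lambda Q w; keep the dominant eigenvector
    _, eigvecs = eigh(S, Q)
    return eigvecs[:, -1]


# Filter-bank usage sketch (assumed parameters): filter trials into several sub-bands,
# compute one TRCA filter per class and sub-band, and combine the per-band correlation
# scores; in the ensemble variant, filters from all classes are applied jointly.
# w = trca_spatial_filter(bandpass(trials, 28, 90, fs=1000))
```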
Neurofeedback (NF) technology based on electroencephalogram (EEG) data or functional magnetic resonance imaging (fMRI) has been widely studied and applied. In contrast, functional near-infrared spectroscopy (fNIRS) has only in recent years become a new technique in NF research. fNIRS is a neuroimaging technology based on hemodynamics, which has the advantages of low cost, good portability and high spatial resolution, and is well suited for use in natural environments. At present, there is a lack of comprehensive reviews of fNIRS-based NF (fNIRS-NF) technology in China. To provide a reference for research on fNIRS-NF, this paper first describes the principle, key technologies and applications of fNIRS-NF, with a focus on its applications. Finally, future development trends of fNIRS-NF are summarized and discussed. In conclusion, this paper reviews fNIRS-NF technology and its applications, and concludes that fNIRS-NF has practical potential in neurological diseases and related fields, and that fNIRS can serve as a good modality for NF training. This paper is expected to provide reference information for the development of fNIRS-NF technology.
Control beyond visual range is of great significance for animal-robots with wide-range motion capability. For pigeon-robots, such control can be achieved through onboard preprogramming, but this does not yet constitute a closed loop. This study designed a new control system for pigeon-robots, which integrates trajectory monitoring with brain stimulation. It achieved closed-loop control of turning and circling by estimating the pigeon's flight state in real time and applying the corresponding regulation logic. The stimulation targets were located in the formatio reticularis medialis mesencephali (FRM) of the left and right brain, for left- and right-turn control, respectively. The stimulus waveform mimicked the nerve cell membrane potential and was delivered intermittently. The wearable control unit weighed 11.8 g in total. The results showed a 90% success rate for closed-loop control of pigeon-robots. It was also convenient to obtain the wing shape during flight maneuvers by equipping a pigeon-robot with an onboard camera, and it was feasible to regulate the evolution of pigeon flocks using pigeon-robots at different hierarchical levels. All of these lay the groundwork for the application of pigeon-robots in scientific research.
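To make the closed-loop regulation logic concrete, below is a minimal, hypothetical sketch of the turn-decision step: it chooses which FRM side to stimulate from the heading error between the estimated flight state and the target course. The dead band, the left/right mapping, and the function name are illustrative assumptions, not the authors' actual control law.

```python
def turn_command(current_heading_deg, target_heading_deg, dead_band_deg=15.0):
    """Toy decision rule for closed-loop turn control.

    Headings are in degrees; returns 'left', 'right', or None (no stimulation).
    The dead band and side mapping are assumptions for illustration only.
    """
    # Wrap the heading error into (-180, 180] degrees
    error = (target_heading_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    if abs(error) <= dead_band_deg:
        return None                       # on course: stimulate neither side
    return "left" if error < 0 else "right"


# Example: pigeon flying at 120 deg, target course 90 deg -> command a left turn
# print(turn_command(120.0, 90.0))
```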
This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system is designed to facilitate the selection of real-world objects through visual gaze in real-life scenarios. By integrating object detection and AR technology, the system augments real objects with visual enhancements, providing users with visual stimuli that induce the corresponding brain signals. SSVEP decoding is then used to interpret these brain signals and identify the objects that users focus on. Additionally, an adaptive dynamic time-window-based filter bank canonical correlation analysis was employed to rapidly decode the subjects' brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
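For reference, the following is a minimal numpy/scipy sketch of standard filter bank canonical correlation analysis (FBCCA) scoring for one candidate stimulation frequency; the sub-band edges, harmonic count, and weighting constants are common defaults assumed here, and the adaptive dynamic time-window extension described in the abstract (growing the data window until a confidence criterion is met) is not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def cca_max_corr(X, Y):
    """Largest canonical correlation between X (samples x channels) and Y (samples x refs)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]


def reference(freq, fs, n_samples, n_harmonics=3):
    """Sine-cosine reference template for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])


def fbcca_score(eeg, freq, fs, bands=((8, 88), (16, 88), (24, 88)), a=1.25, b=0.25):
    """Weighted sum of sub-band CCA correlations for one candidate frequency.

    eeg: array (n_samples, n_channels); fs must be well above twice the upper band edge.
    The target frequency is the candidate with the highest score.
    """
    Y = reference(freq, fs, eeg.shape[0])
    score = 0.0
    for n, (lo, hi) in enumerate(bands, start=1):
        bb, ab = butter(4, [lo, hi], btype="bandpass", fs=fs)
        Xn = filtfilt(bb, ab, eeg, axis=0)             # n-th filter-bank sub-band
        rho = cca_max_corr(Xn, Y)
        score += (n ** (-a) + b) * rho ** 2             # standard FBCCA sub-band weighting
    return score
```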
Stroke is an acute cerebrovascular disease in which a sudden interruption of the blood supply to the brain or the rupture of cerebral blood vessels damages brain cells and consequently impairs the patient's motor and cognitive abilities. A novel rehabilitation training model integrating brain-computer interface (BCI) and virtual reality (VR) not only promotes functional activation of brain networks, but also provides immersive and engaging contextual feedback for patients. In this paper, we designed a hand rehabilitation training system integrating multi-sensory stimulation feedback, BCI and VR, which guides patients' motor imagery through tasks in the virtual scene, acquires patients' motor intentions, and then carries out human-computer interaction within the virtual scene. At the same time, haptic feedback is incorporated to further enhance patients' proprioceptive sensations, so as to realize hand function rehabilitation training based on multi-sensory stimulation feedback combining vision, hearing, and touch. In this study, we compared the power spectral density of different EEG frequency bands before and after the incorporation of haptic feedback. The results showed that the motor brain area was significantly activated after haptic feedback was incorporated, with a significant increase in power spectral density in the high-gamma band. These results indicate that rehabilitation training with the multi-sensory VR-BCI hand function rehabilitation system can promote two-way facilitation of the sensory and motor conduction pathways, thereby accelerating the rehabilitation process.
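As a minimal illustration of the band-power comparison described above, the sketch below estimates per-channel power in a frequency band with Welch's method; the sampling rate, band edges, and variable names are assumptions for illustration rather than the study's actual analysis parameters.

```python
import numpy as np
from scipy.signal import welch


def band_power(eeg, fs, band):
    """Average power in a frequency band per channel using Welch's PSD estimate.

    eeg: array (n_channels, n_samples); band: (low, high) in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=-1)


# Example comparison over motor-area channels (band edges are assumed, not the paper's):
# power_before = band_power(eeg_before_haptic, fs=1000, band=(60, 90))
# power_after  = band_power(eeg_after_haptic,  fs=1000, band=(60, 90))
```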
Brain-computer interface (BCI) is a revolutionary technology that transforms traditional human-computer interaction by establishing direct communication and control between the brain and a computer, bypassing the peripheral nervous and muscular systems. With the rapid advancement of BCI technology, growing application demands, and an increasing need for specialized BCI professionals, a new academic major, the BCI major, has gradually emerged. However, few studies to date have discussed the interdisciplinary nature and training framework of this emerging major. To address this gap, this paper first introduced the application demands for BCI technology in both medical and non-medical fields. The paper also described the interdisciplinary nature of the BCI major and the urgent need for specialized professionals in this field. Subsequently, a training program for the BCI major was presented, with careful consideration of the multidisciplinary nature of BCI research and development, along with recommendations for curriculum structure and credit distribution. Additionally, the challenges facing the construction of the BCI major were analyzed, and strategies for addressing these challenges were suggested. Finally, the future of the BCI major was envisioned. It is hoped that this paper will provide a valuable reference for the development and construction of the BCI major.
Patients with amyotrophic lateral sclerosis (ALS) often have difficulty expressing their intentions through language and behavior, which prevents them from communicating properly with the outside world and seriously affects their quality of life. The brain-computer interface (BCI) has received much attention as an aid for ALS patients to communicate with the outside world, but bulky devices cause inconvenience to patients in practical use. To improve the portability of the BCI system, this paper proposed a wearable P300-speller brain-computer interface system based on augmented reality (MR-BCI). This system used a HoloLens 2 augmented reality device to present the paradigm, an OpenBCI device to capture EEG signals, and a Jetson Nano embedded computer to process the data. Meanwhile, to optimize the system's character recognition performance, this paper proposed a convolutional neural network classification method with low computational complexity, deployed on the embedded system for real-time classification. The results showed that, compared with a P300-speller brain-computer interface system based on a computer screen (CS-BCI), the MR-BCI induced an increase in the amplitude of the P300 component, increases in accuracy of 1.7% and 1.4% in offline and online experiments, respectively, and an increase in information transfer rate of 0.7 bit/min. The MR-BCI proposed in this paper achieves a wearable BCI system while maintaining system performance, which has a positive effect on realizing clinical applications of BCI.
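To illustrate what a low-complexity CNN classifier for single-trial P300 detection might look like on an embedded device, here is a minimal PyTorch sketch; the layer sizes, channel count, and epoch length are illustrative assumptions and not the architecture reported in the paper.

```python
import torch
import torch.nn as nn


class LightP300Net(nn.Module):
    """Compact CNN for binary P300 (target vs. non-target) classification.

    Input shape: (batch, 1, n_channels, n_samples). A hypothetical stand-in for the
    low-complexity network described in the abstract.
    """

    def __init__(self, n_channels=8, n_samples=200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(n_channels, 1)),          # spatial filtering across electrodes
            nn.BatchNorm2d(8), nn.ELU(),
            nn.Conv2d(8, 8, kernel_size=(1, 16), padding=(0, 8)),  # temporal filtering
            nn.BatchNorm2d(8), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.25),
        )
        with torch.no_grad():                                       # infer flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, 2)                      # target vs. non-target logits

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


# Example forward pass on a dummy batch of 4 epochs:
# logits = LightP300Net()(torch.randn(4, 1, 8, 200))
```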
In the field of brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS), traditional subject-specific decoding methods suffer from long calibration times and low cross-subject generalizability, which restricts the promotion and application of BCI systems in daily life and clinical settings. To address this dilemma, this study proposes a novel deep transfer learning approach that combines a revised inception-residual network (rIRN) model with a model-based transfer learning (TL) strategy, referred to as TL-rIRN. This study performed cross-subject recognition experiments on mental arithmetic (MA) and mental singing (MS) tasks to validate the effectiveness and superiority of the TL-rIRN approach. The results show that TL-rIRN significantly shortens the calibration time, reduces the training time of the target model and the consumption of computational resources, and markedly enhances cross-subject decoding performance compared with subject-specific decoding methods and other deep transfer learning methods. In summary, this study provides a basis for the selection of cross-subject, cross-task, and real-time decoding algorithms for fNIRS-BCI systems, with potential applications in constructing convenient and universal BCI systems.
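As a rough illustration of the model-based transfer learning strategy (pre-train on source subjects, then adapt to the target subject with a short calibration set), the PyTorch sketch below freezes a feature extractor and fine-tunes only the classification head; the submodule names `features` and `classifier`, the data loader, and the hyperparameters are hypothetical and not taken from the TL-rIRN implementation.

```python
import torch
import torch.nn as nn


def fine_tune_for_target(pretrained_model, target_loader, n_epochs=10, lr=1e-3):
    """Model-based transfer learning sketch for a target subject.

    Assumes `pretrained_model` was trained on source-subject data and exposes
    `.features` and `.classifier` submodules (hypothetical names). Only the
    classifier is updated on the target subject's small calibration set.
    """
    for p in pretrained_model.features.parameters():
        p.requires_grad = False                              # keep source-subject representations

    optim = torch.optim.Adam(pretrained_model.classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    pretrained_model.train()
    for _ in range(n_epochs):
        for x, y in target_loader:                            # e.g. MA vs. MS calibration trials
            optim.zero_grad()
            loss = loss_fn(pretrained_model(x), y)
            loss.backward()
            optim.step()
    return pretrained_model
```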
Rapid serial visual presentation-based brain-computer interface (RSVP-BCI) is a popular technology for target detection tasks, exploiting the human brain's rapid perception of the environment. However, decoding brain states from single trials of multichannel electroencephalogram (EEG) recordings remains a challenge due to the low signal-to-noise ratio (SNR) and nonstationarity of the signal. To address the low single-trial classification accuracy in RSVP-BCI, this paper presents a new feature extraction algorithm that applies principal component analysis (PCA) and the common spatial pattern (CSP) algorithm separately in the spatial and temporal domains, creating a spatial-temporal hybrid CSP-PCA (STHCP) algorithm. By maximizing the discrimination distance between target and non-target trials, the feature dimensionality was reduced effectively. The area under the curve (AUC) of the STHCP algorithm was higher than that of three benchmark algorithms (SWFP, CSP and PCA) by 17.9%, 22.2% and 29.2%, respectively. The STHCP algorithm provides a new method for target detection.
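Below is a minimal numpy/scipy sketch of the two building blocks named in the abstract, CSP spatial filtering and PCA dimensionality reduction; how STHCP couples them across the spatial and temporal dimensions follows the paper and is not reproduced here, so the array shapes and the pairing shown are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh


def csp_filters(trials_a, trials_b, n_filters=4):
    """Common spatial pattern filters from two classes (e.g., target vs. non-target).

    Each input: array (n_trials, n_channels, n_samples). Returns (n_filters, n_channels).
    """
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalized spatial covariances
        return np.mean(covs, axis=0)

    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    eigvals, eigvecs = eigh(Ca, Ca + Cb)                       # generalized eigenvalue problem
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]  # most discriminative ends
    return eigvecs[:, picks].T


def pca_reduce(features, n_components):
    """Project features (n_trials, n_dims) onto the top principal components."""
    centered = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T
```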
With the development of brain-computer interface (BCI) technology and its translational application in clinical medicine, BCI medicine has emerged, ushering in profound changes to the practice of medicine while also raising a series of related ethical issues. BCI medicine is progressively emerging as a new disciplinary focus, yet to date there has been limited literature discussing it. Therefore, this paper focuses on BCI medicine, first providing an overview of the main potential medical applications of BCI technology. It then defines the discipline, outlining its objectives, methodologies, potential efficacy, and associated translational medical research. Additionally, it discusses the ethics associated with BCI medicine, and introduces standardized operational procedures for BCI medical applications and methods for evaluating their efficacy. Finally, it anticipates the challenges and future directions of BCI medicine. In the future, BCI medicine may become a new academic discipline or major in higher education. In summary, it is hoped that this article will provide ideas and references for the development of the discipline of BCI medicine.