This study aimed to explore differences in event-related potentials (ERP) between children with attention deficit hyperactivity disorder (ADHD) and normal children, so as to provide a scientific basis for the diagnosis of ADHD. Eight children were assigned to the ADHD group according to the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), and the control group consisted of eight normal children. A modified visual continuous performance test (CPT) was used as the experimental paradigm, comprising two major conditions, Go and NoGo. All 16 subjects participated in the study. A high-density EEG acquisition system was used to record the EEG signals, and the data were processed by ERP and spectral analysis. The P2-N2 peak-to-peak value and the spectral peak around 11 Hz were compared between the ADHD group and the control group, and statistical tests were applied to the two groups. The results showed that: ① under the Go condition, the ADHD group had a significantly lower P2-N2 peak-to-peak value than the control group (P < 0.05), whereas under the NoGo condition there was no significant difference between the groups; ② compared with the control group, the ADHD group had a significantly lower spectral amplitude around 11 Hz under the NoGo condition (P < 0.05), whereas under the Go condition the difference was not significant. In conclusion, children with ADHD show a degree of cognitive dysfunction. The P2-N2 peak-to-peak value and the spectral peak around 11 Hz could serve as clinical indexes for evaluating the cognitive function of children with ADHD, and these two objective indexes may support early diagnosis and effective treatment of ADHD.
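As a rough illustration of the two indexes compared here, the sketch below computes a P2-N2 peak-to-peak amplitude and an alpha-band (~11 Hz) spectral peak from an averaged ERP waveform, then runs a two-group test. The sampling rate, baseline length and P2/N2 latency windows are illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch, assuming a 500 Hz sampling rate, a 0.2 s pre-stimulus
# baseline, and hypothetical P2/N2 search windows.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

FS = 500                       # assumed sampling rate (Hz)
T0 = 0.2                       # assumed pre-stimulus baseline (s)

def p2_n2_peak_to_peak(erp, fs=FS, t0=T0,
                       p2_win=(0.15, 0.25), n2_win=(0.20, 0.35)):
    """Peak-to-peak value between the P2 maximum and the N2 minimum.
    Window limits are hypothetical post-stimulus latency ranges."""
    t = np.arange(len(erp)) / fs - t0              # time relative to stimulus
    p2 = erp[(t >= p2_win[0]) & (t <= p2_win[1])].max()
    n2 = erp[(t >= n2_win[0]) & (t <= n2_win[1])].min()
    return p2 - n2

def spectral_peak_around_11hz(epoch, fs=FS, band=(9.0, 13.0)):
    """Maximum Welch PSD amplitude in a band around 11 Hz."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    return pxx[(f >= band[0]) & (f <= band[1])].max()

# Example with synthetic data: one averaged waveform per subject, 8 per group.
rng = np.random.default_rng(0)
adhd = [rng.standard_normal(int(1.2 * FS)) for _ in range(8)]
ctrl = [rng.standard_normal(int(1.2 * FS)) for _ in range(8)]
stat, p = ttest_ind([p2_n2_peak_to_peak(x) for x in adhd],
                    [p2_n2_peak_to_peak(x) for x in ctrl])
```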
Objective To understand the awareness and acceptance of community hemodialysis centers among hemodialysis patients in Yangzhou, and to provide a theoretical basis for the development of community hemodialysis centers. Methods A cluster random sampling method was used to select 400 maintenance hemodialysis patients treated in various areas of Yangzhou in April 2021 for a questionnaire survey, and the factors influencing patients’ medical treatment behavior were analyzed. Results A total of 390 valid questionnaires were recovered, for an effective recovery rate of 97.50%. Among the patients, 40.51% were very concerned about the construction of hemodialysis centers in the community, 56.67% understood the relevant policies, and 56.92% were willing to choose the community for dialysis treatment. Logistic regression analysis showed that the main factors affecting whether patients choose the community for hemodialysis treatment included the patients’ residence [Jiangdu vs. Guangling: odds ratio (OR)=7.183, 95% confidence interval (CI) (2.010, 25.674), P=0.002; Gaoyou vs. Guangling: OR=22.512, 95%CI (7.201, 70.373), P<0.001; Yizheng vs. Guangling: OR=25.137, 95%CI (7.636, 82.744), P<0.001; Baoying vs. Guangling: OR=23.784, 95%CI (7.795, 72.569), P<0.001], degree of concern [some concern vs. very concerned: OR=0.267, 95%CI (0.137, 0.521), P<0.001; not very concerned vs. very concerned: OR=0.062, 95%CI (0.023, 0.168), P<0.001; not concerned vs. very concerned: OR=0.101, 95%CI (0.023, 0.439), P=0.002], and awareness [somewhat know vs. know very well: OR=0.025, 95%CI (0.002, 0.318), P=0.004; don’t know very well vs. know very well: OR=0.035, 95%CI (0.003, 0.439), P=0.009; don’t know vs. know very well: OR=0.006, 95%CI (0.000, 0.084), P<0.001]. Conclusions Hemodialysis patients in Yangzhou have a low level of awareness and acceptance of community-based hemodialysis centers. The patients’ residence, degree of concern, and awareness of community-based hemodialysis centers directly affect whether they choose the community for treatment. The relevant departments and medical institutions can start from the factors that affect patients’ choice of medical treatment, further strengthen the publicity of community dialysis, optimize the allocation of medical resources, and improve the capacity of community health services.
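For readers reproducing this type of analysis, the sketch below fits a binary logistic regression with the same kind of categorical predictors (residence, degree of concern, awareness) and reports odds ratios with 95% confidence intervals. The column names, category labels and synthetic data are illustrative assumptions about the survey, not the study's actual dataset.

```python
# Minimal sketch, assuming one row per respondent and categorical predictors
# with the reference levels used in the abstract (Guangling, "very concerned",
# "know very well").
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "choose_community": rng.binomial(1, 0.57, 390),
    "residence": rng.choice(
        ["Guangling", "Jiangdu", "Gaoyou", "Yizheng", "Baoying"], 390),
    "concern": rng.choice(
        ["very concerned", "some concern", "not very concerned", "not concerned"], 390),
    "awareness": rng.choice(
        ["know very well", "somewhat know", "do not know very well", "do not know"], 390),
})

model = smf.logit(
    "choose_community ~ C(residence, Treatment('Guangling'))"
    " + C(concern, Treatment('very concerned'))"
    " + C(awareness, Treatment('know very well'))",
    data=df,
).fit()

# Exponentiate coefficients and confidence limits to obtain OR and 95% CI.
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(or_table.round(3))
```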
Skin cancer is a significant public health issue, and computer-aided diagnosis technology can effectively alleviate this burden. Accurate identification of skin lesion types is crucial when employing computer-aided diagnosis. This study proposes a multi-level attention cascaded fusion model based on Swin-T and ConvNeXt. The model employs hierarchical Swin-T and ConvNeXt branches to extract global and local features, respectively, and introduces residual channel attention and spatial attention modules for further feature extraction. Multi-level attention mechanisms are used to process the multi-scale global and local features. To address the problem of shallow features being lost because of their distance from the classifier, a hierarchical inverted residual fusion module is proposed to dynamically adjust the extracted feature information. A balanced sampling strategy and focal loss are employed to tackle the imbalanced categories of skin lesions. Experimental testing on the ISIC2018 and ISIC2019 datasets yielded accuracy, precision, recall, and F1-score of 96.01%, 93.67%, 92.65%, and 93.11%, and 92.79%, 91.52%, 88.90%, and 90.15%, respectively. Compared with Swin-T, the proposed method improved accuracy by 3.60% and 1.66%, and compared with ConvNeXt, by 2.87% and 3.45%. The experiments demonstrate that the proposed method accurately classifies skin lesion images, providing a new solution for skin cancer diagnosis.
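The class-imbalance handling mentioned in the abstract (balanced sampling plus focal loss) is a standard combination; the PyTorch sketch below shows one common form of it. The gamma value and the dataset/label interface are illustrative assumptions, not the exact settings of the proposed model.

```python
# Minimal sketch of focal loss and a class-balanced sampler, assuming a
# multi-class lesion dataset with integer labels.
import torch
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler, DataLoader

class FocalLoss(torch.nn.Module):
    """Focal loss: down-weights easy examples by (1 - p_t)^gamma."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")
        p_t = torch.exp(-ce)                  # model probability of the true class
        return ((1.0 - p_t) ** self.gamma * ce).mean()

def balanced_sampler(labels):
    """Sample each image with probability inversely proportional to its class size."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)
    sample_weights = 1.0 / class_counts[labels].float()
    return WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)

# usage (dataset and its label list are assumed to exist):
# loader = DataLoader(dataset, batch_size=32, sampler=balanced_sampler(dataset.labels))
# loss = FocalLoss(gamma=2.0)(model(images), targets)
```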
Because of the high dimensionality and complexity of the data, the analysis of spatial transcriptome data remains challenging, and cluster analysis lies at its core. In this article, a deep learning approach based on graph attention networks is proposed for clustering analysis of spatial transcriptome data. The method first augments the spatial transcriptome data, then uses graph attention networks to extract node features, and finally applies the Leiden algorithm for clustering. Compared with traditional non-spatial and spatial clustering methods, the proposed method performs better on clustering evaluation indexes. The experimental results show that the proposed method can effectively cluster spatial transcriptome data and identify different spatial domains, providing a new tool for studying spatial transcriptome data.
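The overall pipeline described here (spatial graph, graph-attention feature extraction, Leiden clustering) can be sketched as follows. The layer sizes, number of spatial neighbors and use of scanpy for Leiden are illustrative assumptions; the training objective of the encoder (e.g., a self-supervised reconstruction loss) is omitted.

```python
# Minimal sketch, assuming expr (spots x genes) and coords (spots x 2) arrays.
import numpy as np
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv
from sklearn.neighbors import kneighbors_graph
import anndata as ad
import scanpy as sc

def spatial_edges(coords, k=6):
    """k-nearest-neighbor graph over spot coordinates -> edge_index tensor."""
    adj = kneighbors_graph(coords, n_neighbors=k, mode="connectivity")
    rows, cols = adj.nonzero()
    return torch.tensor(np.vstack([rows, cols]), dtype=torch.long)

class GATEncoder(torch.nn.Module):
    """Two graph-attention layers producing low-dimensional node embeddings."""
    def __init__(self, in_dim, hidden=128, out_dim=32, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden, heads=heads)
        self.conv2 = GATConv(hidden * heads, out_dim, heads=1)

    def forward(self, x, edge_index):
        h = F.elu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def leiden_clusters(z, resolution=1.0):
    """Leiden clustering of the embeddings via scanpy's neighbor graph."""
    adata = ad.AnnData(z)
    sc.pp.neighbors(adata, use_rep="X")
    sc.tl.leiden(adata, resolution=resolution)
    return adata.obs["leiden"].to_numpy()

# usage (after training the encoder with an objective of your choice):
# edge_index = spatial_edges(coords)
# z = GATEncoder(expr.shape[1])(torch.tensor(expr, dtype=torch.float), edge_index)
# labels = leiden_clusters(z.detach().numpy())
```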
To investigate the differences between the brain networks of children with attention deficit hyperactivity disorder (ADHD) and those of normal children in a task-execution state, this paper conducted a comparative study using network features of the visual functional area. Functional magnetic resonance imaging (fMRI) data of 23 children with ADHD [age: (8.27 ± 2.77) years] and 23 normal children [age: (8.70 ± 2.58) years] were acquired with a visual capture paradigm while the subjects performed a guessing task. First, the fMRI data were used to build a visual-area brain functional network. Then, characteristic indexes of this network, including degree distribution, average shortest path length, network density, clustering coefficient, and betweenness centrality, were computed and compared with those of the traditional whole-brain network. Finally, support vector machines (SVM) and other machine learning classifiers were applied to these feature indexes to distinguish children with ADHD from normal children. Classification based on the visual-area brain network features achieved an accuracy of up to 96%, about 10% higher than that obtained with the traditional whole-brain network construction. The results show that visual-area brain functional network analysis can better distinguish children with ADHD from normal children; it helps characterize the differences between their brain networks and supports the auxiliary diagnosis of ADHD.
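The network indexes listed above map directly onto standard graph-theory routines; the sketch below computes them with NetworkX from an ROI correlation matrix and feeds them to an SVM. The correlation threshold and feature set are illustrative assumptions, not the study's actual preprocessing.

```python
# Minimal sketch, assuming one ROI x ROI functional-connectivity matrix per child.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def network_features(corr, threshold=0.3):
    """Binarize a correlation matrix and extract graph-theoretic features."""
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    degrees = np.array([d for _, d in g.degree()])
    feats = [
        degrees.mean(),                                        # average degree
        nx.density(g),                                         # network density
        nx.average_clustering(g),                              # clustering coefficient
        np.mean(list(nx.betweenness_centrality(g).values())),  # betweenness centrality
    ]
    # average shortest path length is only defined on a connected graph
    feats.append(nx.average_shortest_path_length(g) if nx.is_connected(g) else np.nan)
    return np.array(feats)

# usage (corr_matrices and binary labels are assumed to exist):
# X = np.vstack([network_features(c) for c in corr_matrices])
# scores = cross_val_score(SVC(kernel="rbf"), np.nan_to_num(X), labels, cv=5)
```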
Joint attention deficit is one of the core impairments in children with autism and seriously affects the development of basic skills such as language and communication. Owing to its good interactivity and immersion, virtual reality scene intervention has great potential for improving joint attention skills in children with autism. This article reviewed recent applications of virtual reality based social and non-social scenarios in training joint attention skills for children with autism, summarized the problems and challenges of this intervention approach, and proposed a new combined paradigm that couples social-scenario assessment with non-social-scenario training. Finally, it looked ahead to the future development and application prospects of virtual reality technology in joint attention skill training for children with autism.
Fatigue driving is one of the leading causes of traffic accidents, posing a significant threat to drivers and road safety. Most existing methods focus on studying whole-brain multi-channel electroencephalogram (EEG) signals, which involve a large number of channels, complex data processing, and cumbersome wearable devices. To address this issue, this paper proposes a fatigue detection method based on frontal EEG signals and constructs a fatigue driving detection model using an asymptotic hierarchical fusion network. The model employed a hierarchical fusion strategy, integrating an attention mechanism module into the multi-level convolutional module. By utilizing both cross-attention and self-attention mechanisms, it effectively fused the hierarchical semantic features of power spectral density (PSD) and differential entropy (DE), enhancing the learning of feature dependencies and interactions. Experimental validation was conducted on the public SEED-VIG dataset. The proposed model achieved an accuracy of 89.80% using only four frontal EEG channels. Comparative experiments with existing methods demonstrate that the proposed model achieves high accuracy and superior practicality, providing valuable technical support for fatigue driving monitoring and prevention.
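The two input feature types named here, PSD and DE, are conventional band-wise EEG features; the sketch below computes them per channel and band. The band limits, filter order, window length and sampling rate are illustrative assumptions and do not reproduce the SEED-VIG preprocessing.

```python
# Minimal sketch, assuming a 200 Hz sampling rate and five conventional bands.
import numpy as np
from scipy.signal import welch, butter, filtfilt

FS = 200                                    # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8),
         "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def band_psd(x, band, fs=FS):
    """Mean Welch PSD within one frequency band for one channel."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    lo, hi = band
    return pxx[(f >= lo) & (f < hi)].mean()

def band_de(x, band, fs=FS):
    """Differential entropy of the band-filtered signal, assuming a Gaussian
    amplitude distribution: DE = 0.5 * ln(2 * pi * e * variance)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(xf))

def frontal_features(epoch):
    """epoch: channels x samples array for the four frontal channels;
    returns one row of PSD and DE features per channel."""
    return np.array([[band_psd(ch, b) for b in BANDS.values()] +
                     [band_de(ch, b) for b in BANDS.values()] for ch in epoch])
```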
Objective To automatically segment exudate features of diabetic retinopathy from color fundus images using deep learning. Methods An applied study. A U-shaped network model was built and evaluated on the Indian Diabetic Retinopathy Image Dataset (IDRID). Deep residual convolution was introduced into the encoding and decoding stages, which can effectively extract deep exudate features, alleviate overfitting and feature interference, and improve the model's feature expression ability and lightweight performance. In addition, an improved context extraction module was introduced so that the model can capture a wider range of feature information and better perceive retinal lesions, performing well on small details and blurred edges. Finally, a convolutional triplet attention mechanism was introduced to allow the model to automatically learn feature weights, focus on important features, and extract useful information at multiple scales. Precision, recall, Dice coefficient, accuracy, and sensitivity were used to evaluate the ability of the model to automatically detect and segment the retinal exudate features of diabetic patients in color fundus images. Results With this method, the precision, recall, Dice coefficient, accuracy, and sensitivity of the improved model on the IDRID dataset reached 81.56%, 99.54%, 69.32%, 65.36% and 78.33%, respectively. Compared with the original model, the accuracy and Dice coefficient of the improved model increased by 2.35% and 3.35%, respectively. Conclusion The proposed U-shaped network based segmentation method can automatically detect and segment the retinal exudate features in fundus images of diabetic patients, which is of great significance for assisting doctors in diagnosing diseases more accurately.
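For clarity on how the reported metrics relate to the pixel-level confusion counts, the sketch below evaluates a binary exudate mask against ground truth. The 0.5 threshold is an illustrative assumption; note that recall and sensitivity are the same quantity at the pixel level.

```python
# Minimal sketch of pixel-level segmentation metrics for a binary mask.
import numpy as np

def segmentation_metrics(pred, gt, threshold=0.5, eps=1e-7):
    """pred: predicted probability map; gt: binary ground-truth mask."""
    p = (pred >= threshold).astype(np.float64).ravel()
    g = (gt >= 0.5).astype(np.float64).ravel()
    tp = (p * g).sum()
    fp = (p * (1 - g)).sum()
    fn = ((1 - p) * g).sum()
    tn = ((1 - p) * (1 - g)).sum()
    return {
        "precision":   tp / (tp + fp + eps),
        "recall":      tp / (tp + fn + eps),           # identical to sensitivity
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
    }
```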
To address the problems of low accuracy and large boundary-distance deviations in the segmentation of the anterior cruciate ligament (ACL) in knee joint images, this paper proposes an ACL image segmentation model that fuses dilated convolution with a residual hybrid attention U-shaped network (DRH-UNet). The proposed model builds upon the U-shaped network (U-Net) by incorporating dilated convolutions to expand the receptive field, enabling a better understanding of the contextual relationships within the image. Additionally, a residual hybrid attention block is designed in the skip connections to enhance the expression of critical features in key regions and reduce the semantic gap, thereby improving the representation capability for the ACL area. This study constructs an enhanced annotated ACL dataset based on the publicly available Magnetic Resonance Imaging Network (MRNet) dataset. The proposed method is validated on this dataset, and the experimental results demonstrate that the DRH-UNet model achieves a Dice similarity coefficient (DSC) of (88.01±1.57)% and a Hausdorff distance (HD) of 5.16±0.85, outperforming other ACL segmentation methods. The proposed approach further enhances the segmentation accuracy of the ACL, providing valuable assistance for subsequent clinical diagnosis by physicians.
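To make the two building blocks named above concrete, the PyTorch sketch below shows a generic dilated-convolution block and a residual block combining channel and spatial attention. It does not reproduce the exact DRH-UNet structure; layer sizes, kernel sizes and the reduction ratio are assumptions.

```python
# Minimal sketch of a dilated convolution block and a residual hybrid
# (channel + spatial) attention block, as generic U-Net components.
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Two 3x3 convolutions with dilation to enlarge the receptive field."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class ResidualHybridAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) followed by spatial
    attention, added back to the input through a residual connection."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        y = x * self.channel(x)                              # channel-weighted features
        s = torch.cat([y.mean(dim=1, keepdim=True),
                       y.amax(dim=1, keepdim=True)], dim=1)  # avg/max maps for spatial attention
        return x + y * self.spatial(s)                       # residual connection
```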
Evolutionary psychology holds that negative situations may threaten survival, trigger avoidance motivation, and impair both bodily function and psychological state. Both disgusting and sad situations can induce negative emotions, but the differences between the two in attention capture and emotional cognition during emotion induction remain poorly understood. In the present study, typical disgusting and sad situational images were used to induce the two negative emotions, and 15 young students (7 males and 8 females, aged 27±3 years) were recruited. Thirty-two-lead electroencephalograms were recorded while the subjects viewed the situational images, and event-related potentials (ERP) of all leads were obtained for further analysis. Paired-sample t-tests were carried out on the two ERP signals separately induced by disgusting and sad images to obtain the time windows with statistically significant differences between them. The root-mean-square deviations of the two ERP signals within each time window were calculated, and brain topographic maps based on these deviations were drawn to display the spatial differences between the two ERP signals. The results showed that the differences between the ERP signals induced by disgusting and sad images were mainly manifested in an early window T1 (120–450 ms) and a later window T2 (800–1 000 ms). During T1, the occipital lobe, which reflects attention capture, was activated by both disgusting and sad images, whereas the prefrontal cortex, which reflects emotional perception, was activated only by disgusting images. During T2, the prefrontal cortex was activated by both types of images, but the parietal lobe was activated only by disgusting images, indicating stronger emotional perception. These results provide insight for deepening the understanding of negative emotions and for exploring the underlying cognitive neuroscience mechanisms of negative emotion induction.
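The two analysis steps described here (point-by-point paired t-tests, then a per-window root-mean-square value for the topographic maps) can be sketched as follows. The sampling rate, baseline length and the reading of the "root-mean-square deviation" as the RMS of the difference between the two condition averages are assumptions for illustration.

```python
# Minimal sketch, assuming subjects x time ERP arrays per lead and condition.
import numpy as np
from scipy.stats import ttest_rel

FS = 500                                       # assumed sampling rate (Hz)
T0 = 0.2                                       # assumed pre-stimulus baseline (s)

def significant_time_points(erp_disgust, erp_sad, alpha=0.05):
    """Boolean mask of time points where the paired t-test across subjects
    is significant for one lead."""
    _, p = ttest_rel(erp_disgust, erp_sad, axis=0)
    return p < alpha

def window_rms_deviation(erp_disgust, erp_sad, win, fs=FS, t0=T0):
    """RMS of the difference between the two grand-average ERPs within a
    latency window (s), e.g. T1 = (0.12, 0.45) or T2 = (0.80, 1.00)."""
    t = np.arange(erp_disgust.shape[-1]) / fs - t0
    diff = erp_disgust.mean(axis=0) - erp_sad.mean(axis=0)
    seg = diff[(t >= win[0]) & (t <= win[1])]
    return np.sqrt(np.mean(seg ** 2))

# usage: compute window_rms_deviation per lead for T1 and T2, then map the
# per-lead values onto a scalp topography.
```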