      West China Medical Publishers

        Search results for keyword "Feature fusion": 14 results
        • A novel approach for assessing quality of electrocardiogram signal by integrating multi-scale temporal features

          During long-term electrocardiogram (ECG) monitoring, various types of noise inevitably become mixed with the signal, potentially hindering doctors' ability to accurately assess and interpret patient data. Therefore, evaluating the quality of ECG signals before analysis and diagnosis is crucial. This paper addresses the limitations of existing ECG signal quality assessment methods, particularly their insufficient attention to multi-scale correlations across the 12 leads. We propose a novel ECG signal quality assessment method that integrates a convolutional neural network (CNN) with a squeeze-and-excitation residual network (SE-ResNet). This approach not only captures both local and global features of the ECG time series but also emphasizes the spatial correlation among ECG signals. Testing on a public dataset demonstrated that our method achieved an accuracy of 99.5%, sensitivity of 98.5%, and specificity of 99.6%. Compared with other methods, our technique significantly enhances the accuracy of ECG signal quality assessment by leveraging inter-lead correlation information, which is expected to advance the development of intelligent ECG monitoring and diagnostic technology.

          Release date: 2024-12-27 03:50
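
          The SE-ResNet component above reweights feature channels with a squeeze-and-excitation gate. As a rough illustration of that mechanism (not the authors' implementation), a minimal 1-D SE block in PyTorch might look as follows; the channel count and reduction ratio are assumptions:

            # Minimal squeeze-and-excitation block for 1-D ECG feature maps.
            # Channel count and reduction ratio are illustrative assumptions.
            import torch
            import torch.nn as nn

            class SEBlock1d(nn.Module):
                def __init__(self, channels: int, reduction: int = 16):
                    super().__init__()
                    self.squeeze = nn.AdaptiveAvgPool1d(1)  # global temporal average
                    self.excite = nn.Sequential(
                        nn.Linear(channels, channels // reduction),
                        nn.ReLU(inplace=True),
                        nn.Linear(channels // reduction, channels),
                        nn.Sigmoid(),                       # per-channel gates in (0, 1)
                    )

                def forward(self, x: torch.Tensor) -> torch.Tensor:
                    # x: (batch, channels, time) feature maps from the CNN stem
                    w = self.squeeze(x).flatten(1)          # (batch, channels)
                    w = self.excite(w).unsqueeze(-1)        # (batch, channels, 1)
                    return x * w                            # reweighted channels

            out = SEBlock1d(64)(torch.randn(8, 64, 5000))   # e.g. 10 s of 500 Hz ECG
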
        • Research on arrhythmia classification algorithm based on adaptive multi-feature fusion network

          Deep learning methods can automatically analyze electrocardiogram (ECG) data and rapidly classify arrhythmias, which provides significant clinical value for early arrhythmia screening. How to select arrhythmia features effectively under limited abnormal-sample supervision is an urgent issue to address. This paper proposed an arrhythmia classification algorithm based on an adaptive multi-feature fusion network. The algorithm extracted RR-interval features from ECG signals, employed a one-dimensional convolutional neural network (1D-CNN) to extract time-domain deep features, and employed Mel-frequency cepstral coefficients (MFCC) with a two-dimensional convolutional neural network (2D-CNN) to extract frequency-domain deep features. The features were fused using an adaptive weighting strategy for arrhythmia classification. The paper used the arrhythmia database jointly developed by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH) and evaluated the algorithm under the inter-patient paradigm. Experimental results demonstrated that the proposed algorithm achieved an average precision of 75.2%, an average recall of 70.1%, and an average F1-score of 71.3%, showing high classification accuracy and providing algorithmic support for arrhythmia classification in wearable devices.

          Release date: 2025-02-21 03:20
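
          One plausible reading of the adaptive weighting strategy above is a set of learnable branch weights normalized by softmax. The sketch below is an illustration under that assumption (the feature width and three-branch layout are placeholders), not the paper's code:

            # Learnable softmax-weighted fusion of three feature branches:
            # RR-interval, 1D-CNN time-domain, and MFCC/2D-CNN frequency-domain.
            import torch
            import torch.nn as nn

            class AdaptiveFusion(nn.Module):
                def __init__(self, n_branches: int = 3):
                    super().__init__()
                    # one learnable logit per branch, normalized to sum to 1
                    self.logits = nn.Parameter(torch.zeros(n_branches))

                def forward(self, feats: list) -> torch.Tensor:
                    w = torch.softmax(self.logits, dim=0)
                    return sum(wi * f for wi, f in zip(w, feats))

            rr, t_feat, f_feat = (torch.randn(4, 128) for _ in range(3))
            fused = AdaptiveFusion()([rr, t_feat, f_feat])  # (4, 128) fused feature
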
        • Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis

          Magnetic resonance imaging (MRI) can obtain multi-modal images with different contrasts, which provides rich information for clinical diagnosis. However, some contrast images are not scanned, or the quality of the acquired images cannot meet diagnostic requirements, because of difficulties with patient cooperation or limitations of the scanning conditions. Image synthesis techniques have become a way to compensate for such missing images. In recent years, deep learning has been widely used in the field of MRI synthesis. In this paper, a synthesis network based on multi-modal fusion is proposed: it first uses a feature encoder to encode the features of multiple unimodal images separately, then fuses the features of the different modal images through a feature fusion module, and finally generates the target modal image. The similarity measure between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function based on the spatial domain and the K-space domain. After experimental validation and quantitative comparison, the multi-modal fusion deep learning network proposed in this paper can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce the patient's MRI scanning time and address the clinical problem of FLAIR images that are missing or whose quality fails to meet diagnostic requirements.

          Release date: 2023-10-20 04:48
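
          The combined spatial-domain/K-space loss above can be illustrated as a convex combination of L1 terms in both domains; the paper's dynamic weighting is not reproduced here, and alpha is a fixed placeholder:

            # Combined spatial-domain and K-space L1 loss for image synthesis.
            import torch
            import torch.nn.functional as F

            def dual_domain_loss(pred: torch.Tensor, target: torch.Tensor,
                                 alpha: float = 0.5) -> torch.Tensor:
                # pred/target: (batch, 1, H, W) magnitude images
                spatial = F.l1_loss(pred, target)             # pixel-space term
                k_pred = torch.fft.fft2(pred)                 # into K-space
                k_tgt = torch.fft.fft2(target)
                kspace = F.l1_loss(torch.abs(k_pred), torch.abs(k_tgt))
                return alpha * spatial + (1.0 - alpha) * kspace
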
        • Research on emotion recognition methods based on multi-modal physiological signal feature fusion

          Emotion classification and recognition is a crucial area in emotional computing. Physiological signals, such as electroencephalogram (EEG), provide an accurate reflection of emotions and are difficult to disguise. However, emotion recognition still faces challenges in single-modal signal feature extraction and multi-modal signal integration. This study collected EEG, electromyogram (EMG), and electrodermal activity (EDA) signals from participants under three emotional states: happiness, sadness, and fear. A feature-weighted fusion method was applied for integrating the signals, and both support vector machine (SVM) and extreme learning machine (ELM) were used for classification. The results showed that the classification accuracy was highest when the fusion weights were set to EEG 0.7, EMG 0.15, and EDA 0.15, achieving accuracy rates of 80.19% and 82.48% for SVM and ELM, respectively. These rates represented an improvement of 5.81% and 2.95% compared to using EEG alone. This study offers methodological support for emotion classification and recognition using multi-modal physiological signals.

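
          The reported weights (EEG 0.7, EMG 0.15, EDA 0.15) suggest scaling each modality's feature block before classification. The toy sketch below uses random stand-in features and shows only the mechanics, not the study's pipeline:

            # Feature-weighted fusion of three physiological modalities + SVM.
            import numpy as np
            from sklearn.svm import SVC

            rng = np.random.default_rng(0)
            n = 120                                    # trials (synthetic stand-ins)
            eeg = rng.normal(size=(n, 32))             # per-trial EEG features
            emg = rng.normal(size=(n, 8))
            eda = rng.normal(size=(n, 4))
            y = rng.integers(0, 3, size=n)             # happiness / sadness / fear

            # scale each modality's block by its weight, then concatenate
            fused = np.hstack([0.7 * eeg, 0.15 * emg, 0.15 * eda])
            clf = SVC(kernel="rbf").fit(fused, y)
            print(clf.score(fused, y))                 # training accuracy only
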
        • Early Alzheimer’s disease recognition via multimodal hand movement quality assessment

          Alzheimer’s disease (AD) is a common illness among the elderly, and patients' hand movement abilities differ from those of healthy individuals. Focusing on the use of RGB, optical flow, and hand skeleton as tri-modal image information for early AD recognition, a method for early AD recognition via multi-modal hand motion quality assessment (EADR) is proposed. First, a hybrid-modality feature encoder incorporating global contextual information was designed to integrate the global context of features from the three specific modality branches. Subsequently, a fusion-modality feature decoder network incorporating specific modality features was proposed to recover, from the specific modality features, information overlooked by the fusion modality branch, thereby enhancing feature fusion. Experiments demonstrated that EADR could effectively capture high-quality hand motion features and excelled in hand motion quality assessment tasks, outperforming existing models. On this basis, the action quality scoring regression model trained with the k-nearest-neighbors algorithm achieved the best recognition performance for AD patients, with Spearman’s rank correlation coefficient and Kendall’s rank correlation coefficient reaching 90.98% and 83.44%, respectively. This indicates that assessment of hand motor ability may serve as a potential auxiliary tool for early AD identification.

          Release date: 2026-02-06 02:05
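
          The scoring stage above trains a k-nearest-neighbors regressor and reports rank correlations. A minimal version of that evaluation loop on synthetic data (all values made up) could be:

            # KNN score regression evaluated with Spearman and Kendall rank
            # correlations, mirroring the metrics quoted above.
            import numpy as np
            from sklearn.neighbors import KNeighborsRegressor
            from scipy.stats import spearmanr, kendalltau

            rng = np.random.default_rng(1)
            feats = rng.normal(size=(200, 64))         # fused multimodal features
            scores = feats[:, 0] + 0.1 * rng.normal(size=200)  # quality labels

            knn = KNeighborsRegressor(n_neighbors=5).fit(feats[:150], scores[:150])
            pred = knn.predict(feats[150:])
            rho, _ = spearmanr(pred, scores[150:])     # Spearman's rank correlation
            tau, _ = kendalltau(pred, scores[150:])    # Kendall's rank correlation
            print(rho, tau)
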
        • Research on prediction model of protein thermostability integrating graph embedding and network topology features

          Protein structure determines function, and structural information is critical for predicting protein thermostability. This study proposes a novel method for protein thermostability prediction that integrates graph embedding features and network topological features. By constructing residue interaction networks (RINs) to characterize protein structures, we calculated network topological features and utilized deep neural networks (DNN) to mine their inherent characteristics. Using the DeepWalk and Node2vec algorithms, we obtained node embeddings and extracted graph embedding features through a TopN strategy combined with bidirectional long short-term memory (BiLSTM) networks. Additionally, we introduced the Doc2vec algorithm to replace the Word2vec module in the graph embedding algorithms, generating graph embedding feature vector encodings. By employing an attention mechanism to fuse graph embedding features with network topological features, we constructed a high-precision prediction model, achieving 87.85% prediction accuracy on a bacterial protein dataset. Furthermore, we analyzed the differing contributions of network topological features in the model and the differences among the graph embedding methods, and found that the combination of DeepWalk features with Doc2vec and all topological features was crucial for identifying thermostable proteins. This study provides a practical and effective new method for protein thermostability prediction, and it also offers theoretical guidance for exploring protein diversity, discovering new thermostable proteins, and intelligently modifying mesophilic proteins.

          Release date: 2025-08-19 11:47
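
          The attention-based fusion of graph-embedding and topological features could be realized as a learned gate over the two sources; the sketch below is an illustrative assumption, not the paper's architecture:

            # Two-source attention fusion: softmax gate over graph-embedding
            # features and network-topology features of equal (assumed) width.
            import torch
            import torch.nn as nn

            class AttentionFusion(nn.Module):
                def __init__(self, dim: int = 64):
                    super().__init__()
                    self.gate = nn.Linear(2 * dim, 2)  # one score per source

                def forward(self, g_emb, topo):
                    a = torch.softmax(self.gate(torch.cat([g_emb, topo], dim=-1)), dim=-1)
                    return a[..., :1] * g_emb + a[..., 1:] * topo

            fused = AttentionFusion()(torch.randn(16, 64), torch.randn(16, 64))
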
        • Discrimination of macrotrabecular-massive hepatocellular carcinoma based on fusion of multi-phase contrast-enhanced computed tomography radiomics features

          The macrotrabecular-massive (MTM) subtype of hepatocellular carcinoma (HCC) is a histological variant with higher malignant potential. Non-invasive preoperative identification of MTM-HCC is crucial for precise treatment. Current radiomics-based diagnostic models often integrate multi-phase features by simple feature concatenation, which may inadequately explore the latent complementary information between phases. This study proposes a feature fusion-based radiomics model using multi-phase contrast-enhanced computed tomography (mpCECT) images. Features were extracted from the arterial phase (AP), portal venous phase (PVP), and delayed phase (DP) CT images of 121 HCC patients. The fusion model was constructed and compared against the traditional concatenation model. Five-fold cross-validation demonstrated that the feature fusion model combining AP and PVP features achieved the best classification performance, with an area under the receiver operating characteristic curve (AUC) of 0.839. Furthermore, for any combination of two phases, the feature fusion model consistently outperformed the traditional feature concatenation approach. In conclusion, the proposed feature fusion model effectively enhances the discrimination capability compared to traditional models, providing a new tool for clinical practice.

          Release date: 2025-12-22 10:16
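
          The five-fold cross-validated AUC protocol above is standard; its mechanics on placeholder radiomics features (the fusion step itself is not reproduced) look like this:

            # Five-fold stratified cross-validation reporting mean AUC.
            import numpy as np
            from sklearn.linear_model import LogisticRegression
            from sklearn.model_selection import StratifiedKFold
            from sklearn.metrics import roc_auc_score

            rng = np.random.default_rng(2)
            X = rng.normal(size=(121, 50))             # stand-in fused features
            y = rng.integers(0, 2, size=121)           # MTM vs. non-MTM label

            aucs = []
            for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
                model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
                aucs.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
            print(np.mean(aucs))                       # mean cross-validated AUC
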
        • Recurrence prediction of gastric cancer based on multi-resolution feature fusion and context information

          Pathological images of gastric cancer serve as the gold standard for diagnosing this malignancy. However, the recurrence prediction task often encounters challenges such as inconspicuous morphological features of the lesions, insufficient fusion of multi-resolution features, and an inability to leverage contextual information effectively. To address these issues, a three-stage recurrence prediction method based on pathological images of gastric cancer is proposed. In the first stage, the self-supervised learning framework SimCLR was adopted to train on low-resolution patch images, aiming to reduce the interdependence among diverse tissue images and yield decoupled, enhanced features. In the second stage, the resulting low-resolution enhanced features were fused with the corresponding high-resolution unenhanced features to achieve feature complementation across resolutions. In the third stage, to address the position-encoding difficulty caused by the large variation in the number of patch images, position encoding was performed over multi-scale local neighborhoods, and a self-attention mechanism was employed to obtain features with contextual information. The resulting contextual features were further combined with local features extracted by a convolutional neural network. Evaluation on clinically collected data showed that, compared with the best-performing traditional methods, the proposed network achieved the best accuracy and area under the curve (AUC), improving them by 7.63% and 4.51%, respectively. These results validate the usefulness of this method for predicting gastric cancer recurrence.

          Release date: 2024-10-22 02:39
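
          The second-stage fusion above pairs each patch's low-resolution self-supervised feature with its high-resolution counterpart. A minimal sketch, with assumed feature widths and a simple learned projection, is:

            # Concatenate low-resolution (SimCLR) and high-resolution patch
            # features, then project to a shared width. Dimensions are assumed.
            import torch
            import torch.nn as nn

            class MultiResFusion(nn.Module):
                def __init__(self, lo_dim: int = 128, hi_dim: int = 512, out_dim: int = 256):
                    super().__init__()
                    self.proj = nn.Linear(lo_dim + hi_dim, out_dim)

                def forward(self, lo: torch.Tensor, hi: torch.Tensor) -> torch.Tensor:
                    # lo: (n_patches, lo_dim); hi: (n_patches, hi_dim)
                    return self.proj(torch.cat([lo, hi], dim=-1))

            fused = MultiResFusion()(torch.randn(1000, 128), torch.randn(1000, 512))
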
        • Research on bimodal emotion recognition algorithm based on multi-branch bidirectional multi-scale time perception

          Emotion reflects human psychological and physiological health, and its main expressions are voice and facial expression. How to extract and effectively integrate these two modalities of emotional information is one of the main challenges in emotion recognition. This paper proposes a multi-branch bidirectional multi-scale time perception model, which processes the speech Mel-frequency spectral coefficients in both forward and reverse directions along the time dimension. The model also uses causal convolution to obtain temporal correlation information between features at different scales and assigns attention maps to them according to this information, yielding a multi-scale fusion of speech emotion features. Second, this paper proposes a bimodal dynamic feature fusion algorithm that draws on the strengths of AlexNet and uses overlapping max-pooling layers to obtain richer fused features from the concatenated feature matrices of the different modalities. Experimental results show that the proposed multi-branch bidirectional multi-scale time perception bimodal emotion recognition model reaches accuracies of 97.67% and 90.14% on two public audio-visual emotion datasets, respectively, outperforming other common methods and indicating that the proposed model can effectively capture emotional feature information and improve recognition accuracy.

          Release date: 2025-06-23 04:09
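
          Causal convolution, used above to extract temporal correlations, pads only on the left so each output frame depends solely on past frames. A short PyTorch sketch (kernel size and channel counts are illustrative):

            # Causal 1-D convolution: left-padding prevents future leakage.
            import torch
            import torch.nn as nn
            import torch.nn.functional as F

            class CausalConv1d(nn.Module):
                def __init__(self, ch_in: int, ch_out: int, kernel: int = 3, dilation: int = 1):
                    super().__init__()
                    self.pad = (kernel - 1) * dilation      # left-pad length
                    self.conv = nn.Conv1d(ch_in, ch_out, kernel, dilation=dilation)

                def forward(self, x: torch.Tensor) -> torch.Tensor:
                    # x: (batch, channels, time), e.g. Mel-spectral frames of speech
                    return self.conv(F.pad(x, (self.pad, 0)))

            y = CausalConv1d(40, 64)(torch.randn(2, 40, 300))   # output: (2, 64, 300)
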
        • A motor imagery decoding study integrating differential attention with a multi-scale adaptive temporal convolutional network

          Motor imagery electroencephalogram (MI-EEG) decoding algorithms face multiple challenges. These include incomplete feature extraction, susceptibility of attention mechanisms to distraction under low signal-to-noise ratios, and limited capture of long-range temporal dependencies. To address these issues, this paper proposes a multi-branch differential attention temporal network (MDAT-Net). First, the method constructed a multi-branch feature fusion module to extract and fuse diverse spatio-temporal features from different scales. Next, to suppress noise and stabilize attention, a novel multi-head differential attention mechanism was introduced to enhance key signal dynamics by calculating the difference between attention maps. Finally, an adaptive residual separable temporal convolutional network was designed to efficiently capture long-range dependencies within the feature sequence for precise classification. Experimental results showed that the proposed method achieved average classification accuracies of 85.73%, 90.04%, and 96.30% on the public datasets BCI-IV-2a, BCI-IV-2b, and HGD, respectively, significantly outperforming several baseline models. This research provides an effective new solution for developing high-precision motor imagery brain-computer interface systems.

          Release date: 2025-12-22 10:16
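
          One way to realize the differential attention described above is to subtract a scaled second attention map from the first, cancelling attention that both maps assign to noise. The single-head sketch below uses a fixed scaling factor lam as a placeholder for the paper's parameterization:

            # Single-head differential attention: difference of two softmax maps.
            import torch
            import torch.nn.functional as F

            def differential_attention(q1, k1, q2, k2, v, lam: float = 0.5):
                # q*/k*: (batch, time, d); v: (batch, time, d_v)
                d = q1.shape[-1]
                a1 = F.softmax(q1 @ k1.transpose(-2, -1) / d ** 0.5, dim=-1)
                a2 = F.softmax(q2 @ k2.transpose(-2, -1) / d ** 0.5, dim=-1)
                return (a1 - lam * a2) @ v          # denoised attention output

            q1, k1, q2, k2, v = (torch.randn(2, 128, 32) for _ in range(5))
            out = differential_attention(q1, k1, q2, k2, v)     # (2, 128, 32)
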
        Page 1 of 2
