      West China Medical Publishers

        Keyword search for "Attention": 29 results

        • A joint distillation model for tumor segmentation using breast ultrasound images

          Accurate segmentation of breast ultrasound images is an important precondition for lesion assessment. Existing segmentation approaches involve massive numbers of parameters, slow inference, and heavy memory consumption. To tackle this problem, we propose T2KD Attention U-Net (dual-Teacher Knowledge Distillation Attention U-Net), a lightweight semantic segmentation method that combines dual-path joint distillation for breast ultrasound images. First, we designed two teacher models to learn fine-grained features from each class of images, according to the different feature representations and semantic information of benign and malignant breast lesions. Then we leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight balance loss to focus on the semantic features of small objects, addressing the imbalance between tumor and background. Extensive experiments on Dataset BUSI and Dataset B demonstrated that T2KD Attention U-Net outperformed various knowledge distillation counterparts. Concretely, the accuracy, recall, precision, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59% and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40% and 82.42% on Dataset B, respectively. Compared with other models, performance was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and complexity of the student model were significantly reduced (2.2×10⁶ vs. 106.1×10⁶ parameters, 8.4 MB vs. 414 MB, 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). Indeed, the proposed model maintains performance while greatly reducing computation, which provides a new method for deployment in clinical scenarios.

          Release date: 2025-02-21 03:20
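
          For readers unfamiliar with the training setup described above, the sketch below illustrates the general idea of dual-teacher knowledge distillation with a class-weighted segmentation loss. It is a minimal PyTorch illustration under assumed shapes and weighting choices, not the paper's actual T2KD loss.

```python
# Minimal sketch: distill a segmentation student from two teachers while
# up-weighting the small tumor class. Shapes, temperature, and weights are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def dual_teacher_kd_loss(student_logits, teacher_b_logits, teacher_m_logits,
                         target, T=4.0, alpha=0.5, fg_weight=5.0):
    """student_logits: (N, C, H, W); target: (N, H, W) with class indices."""
    # Hard-label segmentation loss, up-weighting the (small) tumor class
    # to counter the foreground/background imbalance.
    class_weights = torch.tensor([1.0, fg_weight], device=student_logits.device)
    seg_loss = F.cross_entropy(student_logits, target, weight=class_weights)

    # Soft-label distillation from the two teachers (temperature-scaled KL).
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    kd = 0.0
    for t_logits in (teacher_b_logits, teacher_m_logits):
        p_t = F.softmax(t_logits / T, dim=1)
        kd = kd + F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
    kd = kd / 2.0

    return (1 - alpha) * seg_loss + alpha * kd

# Example with binary (background/tumor) masks on 128x128 crops:
s = torch.randn(2, 2, 128, 128)
t1, t2 = torch.randn(2, 2, 128, 128), torch.randn(2, 2, 128, 128)
loss = dual_teacher_kd_loss(s, t1, t2, torch.randint(0, 2, (2, 128, 128)))
```
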
        • A Meta-analysis Comparing Atomoxetine with Methylphenidate for the Treatment of Children with Attention-Deficit/Hyperactivity Disorder

          Objective To assess atomoxetine and methylphenidate therapy for attention-deficit/hyperactivity disorder (ADHD). Methods We electronically searched the Cochrane Library (Issue 2, 2008), PubMed (1970 to 2008), MEDLINE (1971 to 2008), EMbase (1971 to 2008), Medscape (1990 to 2008), CBM (1978 to 2008), and NRR (1950 to 2008). We also hand-searched some published and unpublished references. Two independent reviewers extracted data. Quality was assessed according to the Cochrane Reviewer's Handbook 4.0. Meta-analysis was conducted with The Cochrane Collaboration's RevMan 4.2.8 software. Results We finally identified 3 randomized controlled trials relevant to the study. Treatment response (reduction in the ADHD-RS Inattention subscale score) was significantly greater for patients in the methylphenidate group than in the atomoxetine group, with WMD = –1.79 and 95%CI –2.22 to –1.35 (P<0.000 01). There was no statistical difference in the other outcome measures between the two groups (P>0.05). Conclusions The effectiveness and tolerability of methylphenidate and atomoxetine are similar in the treatment of ADHD. Further large randomized, double-blind, placebo-controlled trials with end-point outcome measures of long-term safety and efficacy are needed.

          Release date: 2016-09-07 02:09
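
          As background for the pooled estimate quoted above, the toy example below shows how a fixed-effect inverse-variance weighted mean difference and its 95% confidence interval are computed; the per-trial numbers are placeholders, not data from this review.

```python
# Toy sketch of fixed-effect inverse-variance pooling of weighted mean
# differences (WMD), the kind of estimate RevMan reports.
import math

trials = [  # (mean difference, standard error) per trial, hypothetical values
    (-1.5, 0.40),
    (-2.1, 0.35),
    (-1.8, 0.50),
]
weights = [1 / se**2 for _, se in trials]                      # weight = 1 / variance
pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"pooled WMD = {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```
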
        • Segmentation of ground glass pulmonary nodules using a fully convolutional residual network based on atrous spatial pyramid pooling and attention mechanism

          Accurate segmentation of ground glass nodules (GGN) is important in clinical practice, but it is a difficult task because GGNs in computed tomography images show blurred boundaries, irregular shapes, and uneven intensity. This paper aims to segment GGNs by proposing a fully convolutional residual network, i.e., a residual network based on an atrous spatial pyramid pooling structure and attention mechanism (ResAANet). The network uses the atrous spatial pyramid pooling (ASPP) structure to expand the receptive field of the feature maps and extract richer features, and utilizes an attention mechanism, residual connections, and long skip connections to fully retain the sensitive features extracted by the convolutional layers. First, we employed 565 GGNs provided by Shanghai Chest Hospital to train and validate ResAANet and obtain a stable model. Then, two groups of data selected from clinical examinations (84 GGNs) and the lung image database consortium (LIDC) dataset (145 GGNs) were employed to validate and evaluate the performance of the proposed method. Finally, we applied a best-threshold method to remove false positive regions and obtain optimized results. The average Dice similarity coefficient (DSC) of the proposed algorithm on the clinical dataset and the LIDC dataset reached 83.46% and 83.26% respectively, the average Jaccard index (IoU) reached 72.39% and 71.56% respectively, and the segmentation speed reached 0.1 seconds per image. Compared with other reported methods, the new method can segment GGNs accurately, quickly and robustly. It can provide doctors with important information such as nodule size or density, which assists in subsequent diagnosis and treatment.

          Release date: 2022-08-22 03:12
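
          The sketch below gives a compact PyTorch version of an atrous spatial pyramid pooling block of the kind ResAANet uses to enlarge the receptive field; the dilation rates and channel counts are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal ASPP block: parallel atrous convolutions at several dilation rates,
# concatenated and projected back to a single feature map.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 atrous convolution per dilation rate, all applied to the same input.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated multi-rate features back to out_ch channels.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Example: a batch of single-channel 64x64 CT crops.
y = ASPP(in_ch=1, out_ch=32)(torch.randn(2, 1, 64, 64))  # -> (2, 32, 64, 64)
```
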
        • SMILESynergy: Anticancer drug synergy prediction based on Transformer pre-trained model

          The synergistic effect of drug combinations can overcome acquired resistance to single-drug therapy and has great potential for the treatment of complex diseases such as cancer. In this study, to explore the impact of interactions between different drug molecules on the effect of anticancer drugs, we proposed a Transformer-based deep learning prediction model, SMILESynergy. First, drug text data in the simplified molecular input line entry system (SMILES) format were used to represent the drug molecules, and drug molecule isomers were generated through SMILES enumeration for data augmentation. Then, the attention mechanism in the Transformer was used to encode and decode the augmented drug molecules, and finally a multi-layer perceptron (MLP) was connected to obtain the synergy value of the drugs. Experimental results showed that our model had a mean squared error of 51.34 in regression analysis, an accuracy of 0.97 in classification analysis, and better predictive performance than the DeepSynergy and MulinputSynergy models. SMILESynergy offers improved predictive performance to assist researchers in rapidly screening optimal drug combinations and improving cancer treatment outcomes.

          Release date: 2023-08-23 02:45
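
          The snippet below sketches the SMILES enumeration step used for data augmentation, assuming RDKit is available; the helper function and the example molecule are illustrative and are not taken from the paper's released code.

```python
# Generate randomized but chemically equivalent SMILES strings for one molecule.
from rdkit import Chem

def enumerate_smiles(smiles, n=5):
    """Return up to n distinct randomized SMILES for the same molecule."""
    mol = Chem.MolFromSmiles(smiles)
    variants = set()
    # doRandom=True asks RDKit for a random atom ordering on every call.
    for _ in range(100 * n):            # cap attempts so tiny molecules cannot loop forever
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        if len(variants) >= n:
            break
    return sorted(variants)

print(enumerate_smiles("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin: up to 5 equivalent strings
```
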
        • Study on speech imagery electroencephalography decoding of Chinese words based on the CAM-Net model

          Speech imagery is an emerging brain-computer interface (BCI) paradigm with the potential to provide effective communication for individuals with speech impairments. This study designed a Chinese speech imagery paradigm using three clinically relevant words ("Help me", "Sit up" and "Turn over") and collected electroencephalography (EEG) data from 15 healthy subjects. Based on these data, a Channel Attention Multi-Scale Convolutional Neural Network (CAM-Net) decoding algorithm was proposed, which combined multi-scale temporal convolutions with asymmetric spatial convolutions to extract multidimensional EEG features, and incorporated a channel attention mechanism along with a bidirectional long short-term memory network to perform channel weighting and capture temporal dependencies. Experimental results showed that CAM-Net achieved a classification accuracy of 48.54% in the three-class task, outperforming baseline models such as EEGNet and Deep ConvNet, and reached a maximum accuracy of 64.17% in the binary classification between "Sit up" and "Turn over". This work provides a promising approach for future Chinese speech imagery BCI research and applications.

          Release date: 2025-06-23 04:09
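
          The following is a minimal PyTorch sketch of the channel-attention idea in CAM-Net: learn one weight per EEG electrode and rescale the channels before further processing. The electrode count and reduction ratio are assumptions for illustration, not the paper's configuration.

```python
# Squeeze-and-excitation style attention over EEG electrodes.
import torch
import torch.nn as nn

class EEGChannelAttention(nn.Module):
    def __init__(self, n_channels=64, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, n_channels, n_samples) raw or filtered EEG.
        squeeze = x.mean(dim=-1)            # global average over time
        weights = self.fc(squeeze)          # one weight per electrode in (0, 1)
        return x * weights.unsqueeze(-1)    # reweight channels

x = torch.randn(4, 64, 1000)                # 4 trials, 64 electrodes, 1000 samples
out = EEGChannelAttention()(x)              # same shape, channels rescaled
```
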
        • Multimodal high-grade glioma semantic segmentation network with multi-scale and multi-attention fusion mechanism

          Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the lowest survival rate. Surgical resection and postoperative adjuvant chemoradiotherapy are often used in clinical treatment, so accurate segmentation of tumor-related areas is of great significance for patient treatment. In order to improve the segmentation accuracy for HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are: (1) multi-scale residual structures were used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules were used to aggregate features in the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier was constructed using an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. The experimental results showed that the Dice coefficient values of the proposed segmentation method were 0.909 7, 0.877 3 and 0.839 6 for whole tumor, tumor core and enhancing tumor respectively, and the segmentation results had good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network has good segmentation performance for high-grade glioma lesions.

          Release date: 2022-08-22 03:12
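
          As an illustration of attention-based feature aggregation in the spatial dimension, the sketch below follows the common formulation that pools across channels and learns a per-pixel weight; it is a sketch of the general technique, not the paper's exact module.

```python
# Spatial attention: pool across channels, then learn a per-pixel gate.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Average and max pooling across the channel axis give two spatial maps;
        # a convolution turns them into a per-pixel attention weight.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn

feat = torch.randn(2, 32, 96, 96)            # e.g., multi-modal MRI feature maps
out = SpatialAttention()(feat)               # same shape, spatially reweighted
```
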
        • A multi-scale feature capturing and spatial position attention model for colorectal polyp image segmentation

          Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved certain results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, enabling the preservation of global feature information while precisely capturing detailed features, thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the outstanding performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.897 4 and a mean intersection over union (mIoU) of 0.835 8; on the CVC-ClinicDB dataset, it attained an F1-score of 0.939 8 and an mIoU of 0.892 3. Compared with other methods, PCFNet shows significant improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture, demonstrating its innovativeness. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.

          Release date: 2025-10-21 03:48
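
          The sketch below condenses the coordinate attention mechanism that PCFNet builds on: pooling separately along height and width so the attention weights retain positional information. Channel sizes and the reduction ratio are illustrative assumptions.

```python
# Coordinate attention: directional pooling keeps row/column position information.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.shared = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.to_h = nn.Conv2d(mid, channels, 1)
        self.to_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Directional pooling: one descriptor per row and one per column.
        pool_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.shared(torch.cat([pool_h, pool_w], dim=2))        # (n, mid, h+w, 1)
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.to_h(y_h))                        # (n, c, h, 1)
        a_w = torch.sigmoid(self.to_w(y_w)).permute(0, 1, 3, 2)    # (n, c, 1, w)
        return x * a_h * a_w

out = CoordinateAttention(64)(torch.randn(1, 64, 128, 128))        # same shape
```
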
        • Multi-tissue segmentation model for whole slide images of pancreatic cancer based on multi-task learning and attention mechanism

          Accurate segmentation of whole slide images is of great significance for the diagnosis of pancreatic cancer. However, developing an automatic model is challenging due to the complex content, limited samples, and high sample heterogeneity of pathological images. This paper presents a multi-tissue segmentation model for whole slide images of pancreatic cancer. We introduced an attention mechanism into the building blocks, and designed a multi-task learning framework as well as appropriate auxiliary tasks to enhance model performance. The model was trained and tested with the pancreatic cancer pathological image dataset from Shanghai Changhai Hospital, and TCGA data were used as an independent external validation cohort. The F1 scores of the model exceeded 0.97 and 0.92 on the internal and external datasets, respectively, and the generalization performance was also significantly better than that of the baseline method. These results demonstrate that the proposed model can accurately segment eight kinds of tissue regions in whole slide images of pancreatic cancer, providing a reliable basis for clinical diagnosis.

          Release date: 2023-02-24 06:14
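
          To make the multi-task setup concrete, the sketch below shows a shared encoder with a main segmentation head and an auxiliary regression head trained with a weighted joint loss; the auxiliary task and loss weight are assumptions, not the paper's actual design.

```python
# Shared encoder, main segmentation head, auxiliary head, weighted joint loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSegmenter(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.seg_head = nn.Conv2d(32, n_classes, 1)   # main task: per-pixel tissue map
        self.aux_head = nn.Linear(32, 1)              # auxiliary task: e.g., tumor area ratio

    def forward(self, x):
        f = self.encoder(x)
        seg = self.seg_head(f)
        aux = self.aux_head(f.mean(dim=(2, 3)))       # globally pooled features
        return seg, aux

def multitask_loss(seg, aux, seg_target, aux_target, aux_weight=0.3):
    return (F.cross_entropy(seg, seg_target)
            + aux_weight * F.mse_loss(aux.squeeze(1), aux_target))

model = MultiTaskSegmenter()
seg, aux = model(torch.randn(2, 3, 64, 64))
loss = multitask_loss(seg, aux,
                      torch.randint(0, 8, (2, 64, 64)),  # per-pixel tissue labels
                      torch.rand(2))                     # hypothetical auxiliary target
```
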
        • White matter microstructural alterations and classification of patients with different subtypes of attention-deficit/hyperactivity disorder

          Objective To explore white matter microstructural abnormalities in patients with different subtypes of attention-deficit/hyperactivity disorder (ADHD) and to establish a diagnostic classification model. Methods Patients with ADHD admitted to West China Hospital of Sichuan University between January 2019 and September 2021 and healthy controls recruited through advertisement were prospectively selected. All participants underwent diffusion tensor imaging scanning. Whole-brain voxel-based analysis was used to compare fractional anisotropy (FA) maps among patients with the combined subtype of ADHD (ADHD-C), patients with the inattentive subtype of ADHD (ADHD-I), and healthy controls. A support vector machine classifier with feature selection was used to construct an individual ADHD diagnostic classification model, whose performance was evaluated between each pair of groups (ADHD patients and healthy controls). Results A total of 26 ADHD-C patients, 24 ADHD-I patients and 26 healthy controls were included. The three groups showed significant differences in FA values in the bilateral sagittal stratum of the temporal lobe (ADHD-C<ADHD-I<healthy controls) and the isthmus of the corpus callosum (ADHD-C>ADHD-I>healthy controls) (P<0.005). Direct comparison between the two subtypes of ADHD showed that ADHD-C had higher FA than ADHD-I in the right middle frontal gyrus. The classification model differentiating ADHD-C from ADHD-I showed the highest performance, with a total accuracy of 76.0%, sensitivity of 88.5%, and specificity of 70.8%. Conclusions There is both commonality and heterogeneity in white matter microstructural alterations in the two subtypes of ADHD. White matter damage in the sagittal stratum of the temporal lobe and the corpus callosum may be an intrinsic pathophysiological basis of ADHD, while anomalies of the frontal regions may distinguish the subtypes.

          Release date: 2023-03-17 09:43
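
          The snippet below sketches the classification setup described above with scikit-learn: univariate feature selection over voxel-wise FA values followed by a support vector machine, evaluated by cross-validation. The feature count, kernel, and the synthetic data are placeholders, not the study's actual pipeline settings.

```python
# Feature selection + SVM pipeline with cross-validation on placeholder data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2000))     # 50 subjects x 2000 voxel-wise FA features (placeholder)
y = rng.integers(0, 2, size=50)     # 0 = ADHD-C, 1 = ADHD-I (placeholder labels)

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=100)),   # keep the 100 most informative voxels
    ("svm", SVC(kernel="linear", C=1.0)),
])
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```
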
        • Research on classification of benign and malignant lung nodules based on three-dimensional multi-view squeeze-and-excitation convolutional neural network

          Lung cancer is the tumor disease that poses the greatest threat to human health. Early detection is crucial to improving the survival and recovery rates of lung cancer patients. Existing methods use a two-dimensional multi-view framework to learn lung nodule features and simply integrate the multi-view features to classify lung nodules as benign or malignant. However, these methods do not capture spatial features effectively and ignore the variability among views. Therefore, this paper proposes a three-dimensional (3D) multi-view convolutional neural network (MVCNN) framework. To further address the differences between views in the multi-view model, a 3D multi-view squeeze-and-excitation convolutional neural network (MVSECNN) model is constructed by introducing the squeeze-and-excitation (SE) module in the feature fusion stage. Finally, statistical methods are used to analyze the model predictions and doctor annotations. On the independent test set, the classification accuracy and sensitivity of the model were 96.04% and 98.59% respectively, higher than those of other state-of-the-art methods. The consistency score between the model predictions and the pathological diagnosis results was 0.948, significantly higher than that between the doctor annotations and the pathological diagnosis results. The methods presented in this paper can effectively learn the spatial heterogeneity of lung nodules and address multi-view differences, while achieving the classification of benign and malignant lung nodules, which is of great significance for assisting doctors in clinical diagnosis.

          Release date: 2022-08-22 03:12
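
          The sketch below shows a 3D squeeze-and-excitation block of the kind used in the feature fusion stage; the channel count and reduction ratio are illustrative assumptions rather than the paper's exact settings.

```python
# 3D squeeze-and-excitation: globally pool a volumetric feature map, then
# learn a per-channel gate and rescale the channels.
import torch
import torch.nn as nn

class SE3D(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, depth, height, width) 3D nodule features.
        squeeze = x.mean(dim=(2, 3, 4))             # global average pool
        weights = self.fc(squeeze)                  # per-channel weight in (0, 1)
        return x * weights.view(*weights.shape, 1, 1, 1)

out = SE3D(32)(torch.randn(2, 32, 16, 32, 32))      # same shape, channels rescaled
```
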