      West China Medical Publishers
        Keyword search "Deep learning": 76 results
        • Application of generative adversarial network in magnetic resonance image reconstruction

          Magnetic resonance imaging (MRI) is an important medical imaging method whose major limitation is a long scan time, inherent to its imaging mechanism, which increases patients' cost and waiting time for the examination. Parallel imaging (PI), compressed sensing (CS), and other reconstruction technologies have been proposed to accelerate image acquisition, but their results depend on the image reconstruction algorithms, which remain unsatisfactory in both image quality and reconstruction speed. In recent years, image reconstruction based on generative adversarial networks (GAN) has become a research hotspot in magnetic resonance imaging because of its excellent performance. In this review, we summarize recent developments in the application of GAN to MRI reconstruction for both single- and multi-modality acceleration, hoping to provide a useful reference for interested researchers. We also analyze the characteristics and limitations of existing technologies and forecast development trends in this field.

          Release date: 2023-08-23 02:45
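As background to the reconstruction problem this review surveys, the sketch below simulates undersampled k-space acquisition and the naive zero-filled reconstruction that GAN-based methods aim to improve on. The phantom, sampling pattern, and acceleration factor are all illustrative, not taken from any cited work.

```python
import numpy as np

def zero_filled_recon(image, acceleration=2, seed=0):
    """Simulate row-wise k-space undersampling and a naive zero-filled
    reconstruction (the aliased baseline a learned method would refine)."""
    kspace = np.fft.fft2(image)                       # fully sampled k-space
    rng = np.random.default_rng(seed)
    mask = rng.random(image.shape[0]) < 1.0 / acceleration  # keep ~1/R of the rows
    mask[: image.shape[0] // 16] = True               # always keep some low frequencies
    undersampled = kspace * mask[:, None]             # zero out the missing rows
    return np.abs(np.fft.ifft2(undersampled))         # aliased magnitude image

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                           # a square stands in for anatomy
recon = zero_filled_recon(phantom, acceleration=4)
error = np.abs(recon - phantom).mean()                # nonzero: data were discarded
```

Removing k-space rows is what shortens the scan; the residual error is what reconstruction networks are trained to remove.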
        • CT and MRI fusion based on generative adversarial network and convolutional neural networks under image enhancement

          Aiming at the problems of missing important features, inconspicuous details, and unclear textures in the fusion of multimodal medical images, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images using a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator targeted the high-frequency feature images, and dual discriminators were applied to the fused images after the inverse transform. The high-frequency feature images were then fused by the trained GAN model, while the low-frequency feature images were fused by a CNN model pre-trained with transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edges in subjective evaluation. In the objective evaluation, QAB/F, information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher than the best compared results, respectively. The fused images can be effectively applied to medical diagnosis to further improve diagnostic efficiency.

          Release date: 2023-06-25 02:49
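Of the objective indicators listed above, information entropy (IE) is the simplest to state: the Shannon entropy of the image's grey-level histogram. A minimal sketch, where the bin count and the two test images are illustrative:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (in bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                            # drop empty bins (0*log0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.full((8, 8), 128)                 # constant image: zero entropy
noisy = np.arange(256).reshape(16, 16)      # every grey level once: 8 bits
```

Higher IE indicates that the fused image carries more information than either input alone.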
        • Automatic detection and visualization of myocardial infarction in electrocardiograms based on an interpretable deep learning model

          Automated detection of myocardial infarction (MI) is crucial for preventing sudden cardiac death and enabling early intervention in cardiovascular diseases. This paper proposes a deep learning framework based on a lightweight convolutional neural network (CNN) combined with one-dimensional gradient-weighted class activation mapping (1D Grad-CAM) for the automated detection of MI and the visualization of key waveform features in single-lead electrocardiograms (ECGs). The proposed method was evaluated using a total of 432 records from the Physikalisch-Technische Bundesanstalt Diagnostic ECG Database (PTBDB) and the Normal Sinus Rhythm Database (NSRDB), comprising 334 MI and 98 normal ECGs. Experimental results demonstrated that the model achieved an accuracy, sensitivity, and specificity of 95.75%, 96.03%, and 95.47%, respectively, in MI detection. Furthermore, the visualization results indicated that the model’s decision-making process aligned closely with clinically critical features, including pathological Q waves, ST-segment elevation, and T-wave inversion. This study confirms that the proposed deep learning algorithm combined with explainable technology performs effectively in the intelligent diagnosis of MI and the visualization of critical ECG waveforms, demonstrating its potential as a useful tool for early MI risk assessment and computer-aided diagnosis.

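The 1D Grad-CAM visualization described above weights each convolutional channel by its time-averaged gradient and applies a ReLU to the weighted activation sum. A hedged numpy sketch with made-up activation and gradient arrays; a real system would supply these from a trained CNN:

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """activations, gradients: (channels, time) arrays from one conv layer.
    Returns a non-negative, normalized class-activation map over time."""
    weights = gradients.mean(axis=1)                # global-average-pool the gradients
    cam = np.maximum(0.0, weights @ activations)    # ReLU(sum_c w_c * A_c(t))
    if cam.max() > 0:
        cam = cam / cam.max()                       # scale to [0, 1] for display
    return cam

acts = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.0]])    # toy layer activations
grads = np.array([[0.2, 0.2, 0.2], [-0.1, -0.1, -0.1]])  # toy class gradients
cam = grad_cam_1d(acts, grads)
```

Overlaying `cam` on the ECG trace is what lets the model highlight pathological Q waves or ST-segment changes.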
        • Review on ultrasonographic diagnosis of thyroid diseases based on deep learning

          In recent years, the incidence of thyroid diseases has increased significantly, and ultrasound examination is the first choice for their diagnosis. At the same time, deep learning-based medical image analysis has advanced rapidly, with ultrasound image analysis achieving a series of milestone breakthroughs; deep learning algorithms have shown strong performance in medical image segmentation and classification. This article first elaborates on the application of deep learning algorithms to thyroid ultrasound image segmentation, feature extraction, and classification. It then summarizes deep learning algorithms for processing multimodal ultrasound images. Finally, it points out current problems in thyroid ultrasound image diagnosis and looks forward to future development directions. This review can promote the application of deep learning in clinical ultrasound diagnosis of the thyroid and provide a reference for doctors diagnosing thyroid disease.

          Release date: 2023-10-20 04:48
        • Recurrence prediction of gastric cancer based on multi-resolution feature fusion and context information

          Pathological images of gastric cancer serve as the gold standard for diagnosing this malignancy. However, the recurrence prediction task often encounters challenges such as insignificant morphological features of the lesions, insufficient fusion of multi-resolution features, and inability to leverage contextual information effectively. To address these issues, a three-stage recurrence prediction method based on pathological images of gastric cancer is proposed. In the first stage, the self-supervised learning framework SimCLR was adopted to train low-resolution patch images, aiming to diminish the interdependence among diverse tissue images and yield decoupled enhanced features. In the second stage, the obtained low-resolution enhanced features were fused with the corresponding high-resolution unenhanced features to achieve feature complementation across multiple resolutions. In the third stage, to address the position encoding difficulty caused by the large difference in the number of patch images, we performed position encoding based on multi-scale local neighborhoods and employed a self-attention mechanism to obtain features with contextual information. The resulting contextual features were further combined with the local features extracted by the convolutional neural network. The evaluation results on clinically collected data showed that, compared with the best performance of traditional methods, the proposed network achieved the best accuracy and area under the curve (AUC), improved by 7.63% and 4.51%, respectively. These results validate the usefulness of this method in predicting gastric cancer recurrence.

          Release date: 2024-10-22 02:39
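The first-stage SimCLR training mentioned above optimizes a contrastive (NT-Xent) loss between two augmented views of each patch. A minimal numpy sketch of that loss, with toy embeddings standing in for network outputs:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss for two batches of paired embeddings.
    z1[i] and z2[i] are two augmented views of the same patch."""
    z = np.concatenate([z1, z2])                        # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / temperature                         # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float((logsumexp - sim[np.arange(2 * n), pos]).mean())

z1 = np.eye(3)               # three toy patch embeddings
loss = nt_xent(z1, z1)       # matched views give a low loss
```

Minimizing this loss pulls the two views of a patch together and pushes other patches apart, which is what yields the decoupled features used in the later stages.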
        • Screening and diagnostic system construction for optic neuritis and non-arteritic anterior ischemic optic neuropathy based on color fundus images using deep learning

          Objective To construct and evaluate a screening and diagnostic system based on color fundus images and artificial intelligence (AI) for optic neuritis (ON) and non-arteritic anterior ischemic optic neuropathy (NAION). Methods A diagnostic test study. From 2016 to 2020, 267 eyes of 178 NAION patients (NAION group) and 346 eyes of 204 ON patients (ON group) were examined and diagnosed at Zhongshan Ophthalmic Center of Sun Yat-sen University; 1 160 eyes of 513 healthy individuals (normal control group), confirmed to have normal fundus by visual acuity, intraocular pressure, and optical coherence tomography examinations, were collected from 2018 to 2020. All 2 909 color fundus images served as the data set of the screening and diagnostic system, comprising 730, 805, and 1 374 images for the NAION, ON, and normal control groups, respectively. The correctly labeled color fundus images were used as input data, and the EfficientNet-B0 algorithm was selected for model training and validation. Three systems were constructed for screening abnormal optic discs, ON, and NAION. The receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, sensitivity, specificity, and heat maps were used as indicators of diagnostic efficacy. Results In the test data set, the AUC for diagnosing the presence of an abnormal optic disc, ON, and NAION was 0.967 [95% confidence interval (CI) 0.947-0.980], 0.964 (95%CI 0.938-0.979), and 0.979 (95%CI 0.958-0.989), respectively. The activation area of the systems during decision-making was mainly located in the optic disc region. Conclusion The screening and diagnostic systems for abnormal optic disc, ON, and NAION based on color fundus images show accurate and efficient diagnostic performance.

          Release date: 2023-02-17 09:35
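The AUC values reported above can be computed without tracing an explicit ROC curve, via the Mann-Whitney formulation; a small sketch with invented labels and scores:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive scores higher than a random negative (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])            # toy ground truth
s = np.array([0.1, 0.4, 0.35, 0.8])   # toy model scores
auc = roc_auc(y, s)
```

An AUC near 0.97, as the study reports, means almost every diseased eye outscores almost every normal one.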
        • Machine learning-based diagnostic test accuracy research: measurement indicators

          Machine learning-based diagnostic tests differ from traditional diagnostic tests in several of their measurement indicators. In this paper, we elaborate in detail the definitions, calculation methods, and statistical inference of common measurement indicators for machine learning-based diagnostic models. We hope this paper helps clinical researchers better evaluate machine learning diagnostic models.

          Release date: 2023-09-15 03:49
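As a concrete instance of the indicators the paper discusses, the common ones derived from a 2x2 confusion matrix can be computed as follows; the counts are invented for illustration:

```python
def diagnostic_indicators(tp, fp, fn, tn):
    """Common diagnostic-test indicators from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate (recall)
        "specificity": tn / (tn + fp),          # true negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }

m = diagnostic_indicators(tp=90, fp=10, fn=5, tn=95)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, one of the points where evaluation of machine learning models needs care.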
        • Research on bark-frequency spectral coefficients heart sound classification algorithm based on multiple window time-frequency reassignment

          Multi-window time-frequency reassignment helps to improve the time-frequency resolution of bark-frequency spectral coefficient (BFSC) analysis of heart sounds. To this end, this paper proposes a new heart sound classification algorithm that combines feature extraction based on multi-window time-frequency reassigned BFSC with deep learning. First, randomly intercepted heart sound segments were preprocessed with amplitude normalization and framed, and time-frequency reassignment based on short-time Fourier transforms was computed using multiple orthogonal windows. A smooth spectrum estimate was then calculated by arithmetically averaging the resulting independent spectra. Finally, the BFSC of the reassigned spectrum was extracted as a feature by a Bark filter bank. A convolutional network and a recurrent neural network were used as classifiers for model comparison and performance evaluation of the extracted features. The multi-window time-frequency reassigned BFSC method extracted more discriminative features, achieving a binary classification accuracy of 0.936, a sensitivity of 0.946, and a specificity of 0.922. These results show that the proposed algorithm does not require segmentation of the heart sounds, instead randomly intercepting heart sound segments, which greatly simplifies computation, and it is expected to be useful for screening of congenital heart disease.

          Release date: 2024-04-24 09:40
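The multiple-orthogonal-window averaging step described above can be sketched with sine tapers, one standard family of mutually orthogonal windows; this is a simplified stand-in for the paper's full reassignment pipeline, and the sampling rate and test tone are illustrative:

```python
import numpy as np

def sine_tapers(n, k):
    """First k orthonormal sine tapers of length n."""
    j = np.arange(1, n + 1)
    return np.array([np.sqrt(2 / (n + 1)) * np.sin(np.pi * (m + 1) * j / (n + 1))
                     for m in range(k)])

def multiwindow_spectrum(frame, n_tapers=4):
    """Average the spectra obtained with several orthogonal windows,
    giving a smoother estimate than a single-window periodogram."""
    tapers = sine_tapers(len(frame), n_tapers)
    spectra = np.abs(np.fft.rfft(tapers * frame, axis=1)) ** 2
    return spectra.mean(axis=0)                 # arithmetic average over windows

t = np.arange(256) / 1000.0                     # 1 kHz sampling, illustrative
frame = np.sin(2 * np.pi * 50 * t)              # a 50 Hz tone stands in for a heart sound
spec = multiwindow_spectrum(frame)
```

Because the tapers are orthogonal, the individual spectra are approximately independent, so averaging them reduces variance before the Bark filter bank is applied.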
        • Identification of kidney stone types by deep learning integrated with radiomics features

          Currently, kidney stone types are mainly identified manually before surgery, which leads to low classification accuracy and inconsistent diagnostic results because of the reliance on individual expertise. To address this issue, this paper proposes a framework for identifying kidney stone types that combines radiomics and deep learning, aiming at automated, highly accurate preoperative classification. First, radiomics methods are employed to extract radiomics features from the shallow layers of a three-dimensional (3D) convolutional neural network, which are then fused with the deep features of the network. The fused features are then subjected to least absolute shrinkage and selection operator (LASSO) regularization. Finally, a light gradient boosting machine (LightGBM) is used to distinguish infectious from non-infectious kidney stones. Experimental results indicate that the proposed framework achieves an accuracy of 84.5% for preoperative identification of kidney stone types. The framework can effectively distinguish between infectious and non-infectious kidney stones, providing valuable assistance in formulating preoperative treatment plans and in patients' postoperative rehabilitation.

          Release date: 2024-12-27 03:50
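The LASSO step mentioned above can be sketched with iterative soft-thresholding (ISTA), which drives uninformative feature weights to zero; the synthetic data, learning rate, and penalty below are illustrative, not the paper's settings:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, lr=0.01, steps=500):
    """LASSO via iterative soft-thresholding: the sparse weight vector
    implicitly selects the informative features."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)        # gradient of 0.5 * MSE
        w = w - lr * grad                        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))                # 5 candidate features
y = X[:, 0] * 2.0                                # only feature 0 is informative
w = lasso_ista(X, y)
```

The surviving (nonzero) coordinates of `w` are the features that would be passed on to the LightGBM classifier.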
        • SMILESynergy: Anticancer drug synergy prediction based on Transformer pre-trained model

          The synergistic effect of drug combinations can overcome acquired resistance to single-drug therapy and has great potential for treating complex diseases such as cancer. In this study, to explore the impact of interactions between drug molecules on anticancer effect, we proposed a Transformer-based deep learning prediction model, SMILESynergy. First, drug text data in the simplified molecular input line entry system (SMILES) format were used to represent the drug molecules, and isomeric SMILES strings were generated through SMILES enumeration for data augmentation. Then, the attention mechanism of the Transformer was used to encode and decode the augmented drug molecules, and finally a multi-layer perceptron (MLP) was connected to obtain the synergy value of the drugs. Experimental results showed that our model achieved a mean squared error of 51.34 in regression analysis and an accuracy of 0.97 in classification analysis, with better predictive performance than the DeepSynergy and MulinputSynergy models. SMILESynergy offers improved predictive performance to help researchers rapidly screen optimal drug combinations and improve cancer treatment outcomes.

          Release date: 2023-08-23 02:45
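SMILES enumeration itself requires a cheminformatics toolkit (e.g., RDKit), but the input-preparation step, turning SMILES strings into padded token-id sequences for a Transformer, can be sketched directly. This character-level tokenizer is a deliberate simplification: real SMILES tokenizers treat multi-character atoms such as "Cl" as single tokens.

```python
def encode_smiles(smiles_list, pad_token="<pad>"):
    """Character-level encoding of SMILES strings into equal-length
    token-id sequences, as a Transformer encoder would consume them."""
    vocab = {pad_token: 0}
    for s in smiles_list:
        for ch in s:
            vocab.setdefault(ch, len(vocab))    # assign ids in order of appearance
    max_len = max(len(s) for s in smiles_list)
    encoded = [[vocab[ch] for ch in s] + [0] * (max_len - len(s))
               for s in smiles_list]            # right-pad with the pad id
    return encoded, vocab

drugs = ["CCO", "c1ccccc1O"]                    # ethanol and phenol, as toy inputs
ids, vocab = encode_smiles(drugs)
```

Enumerated isomeric SMILES of the same molecule map to different token sequences, which is exactly what makes the enumeration useful as data augmentation.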