West China Medical Publishers
        Search results for keyword "Medical image": 25 results
        • Segmentation of anterior cruciate ligament images by fusing dilated convolution and residual hybrid attention

          To address the problems of low segmentation accuracy and large boundary-distance errors in anterior cruciate ligament (ACL) image segmentation of the knee joint, this paper proposes an ACL image segmentation model that fuses dilated convolution and residual hybrid attention into a U-shaped network (DRH-UNet). The proposed model builds upon the U-shaped network (U-Net) by incorporating dilated convolutions to expand the receptive field, enabling a better understanding of contextual relationships within the image. Additionally, a residual hybrid attention block is designed into the skip connections to enhance the expression of critical features in key regions and reduce the semantic gap, thereby improving the representation of the ACL area. This study constructs an enhanced annotated ACL dataset based on the publicly available Magnetic Resonance Imaging Network (MRNet) dataset. Validated on this dataset, the DRH-UNet model achieves a Dice similarity coefficient (DSC) of (88.01±1.57)% and a Hausdorff distance (HD) of 5.16±0.85, outperforming other ACL segmentation methods. The proposed approach further improves ACL segmentation accuracy, providing valuable assistance for subsequent clinical diagnosis by physicians.

          Release date: 2025-04-24 04:31
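The DSC reported in the abstract above is a region-overlap metric. As a point of reference, a minimal NumPy sketch of the Dice similarity coefficient for binary masks (the standard definition, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 2 predicted pixels, 4 ground-truth pixels, 2 overlapping
pred = np.zeros((4, 4)); pred[1:3, 1] = 1
target = np.zeros((4, 4)); target[1:3, 1:3] = 1
print(round(dice_coefficient(pred, target), 4))  # 2*2/(2+4) ≈ 0.6667
```

A DSC of (88.01±1.57)% therefore means the predicted ACL region overlaps the annotation almost entirely relative to their combined size.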
        • Brain midline segmentation method based on prior knowledge and path optimization

          To address the challenges faced by current brain midline segmentation techniques, such as insufficient accuracy and poor segmentation continuity, this paper proposes a deep learning network model based on a two-stage framework. In the first stage, prior knowledge of the feature consistency of adjacent brain midline slices under normal and pathological conditions is utilized: associated midline slices are selected through slice similarity analysis, and a novel feature weighting strategy collaboratively fuses the overall change characteristics and spatial information of these associated slices, thereby enhancing the feature representation of the brain midline in the intracranial region. In the second stage, an optimal-path search over the network's output probability map is employed, which effectively addresses the problem of discontinuous midline segmentation. The proposed method achieved satisfactory results on the CQ500 dataset provided by the Center for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India: the Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and normalized surface Dice (NSD) were 67.38 ± 10.49, 24.22 ± 24.84, 1.33 ± 1.83, and 0.82 ± 0.09, respectively. The experimental results demonstrate that the proposed method can fully utilize the prior knowledge of medical images to achieve accurate segmentation of the brain midline, providing valuable assistance for subsequent identification of the brain midline by clinicians.

          Release date: 2025-08-19 11:47
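The second stage described above, an optimal-path search over the network's probability map, can be illustrated with a small dynamic-programming sketch. This is an assumption about the general technique, not the authors' implementation: choose one column per row so total probability is maximized while adjacent rows stay within `max_shift` columns of each other, which enforces the continuity the abstract mentions.

```python
import numpy as np

def optimal_midline_path(prob, max_shift=1):
    """Pick one column index per row of a probability map, maximizing
    total probability subject to |col[r] - col[r-1]| <= max_shift."""
    H, W = prob.shape
    score = prob.astype(float).copy()   # best cumulative score ending at (r, c)
    back = np.zeros((H, W), dtype=int)  # predecessor column in the row above
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(0, c - max_shift), min(W, c + max_shift + 1)
            best = lo + int(np.argmax(score[r - 1, lo:hi]))
            back[r, c] = best
            score[r, c] += score[r - 1, best]
    # Backtrack from the best column in the last row
    path = [int(np.argmax(score[-1]))]
    for r in range(H - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]

# A diagonal ridge of probability is followed despite the shift constraint
prob = np.zeros((3, 4)); prob[0, 1] = prob[1, 2] = prob[2, 3] = 1.0
print(optimal_midline_path(prob))  # [1, 2, 3]
```

Because every row must pick exactly one column, the extracted midline can never have gaps, which is how a path formulation repairs discontinuous per-pixel segmentations.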
        • Research progress on the application of artificial intelligence in the screening and treatment of retinopathy of prematurity

          Retinopathy of prematurity (ROP) is a major cause of vision loss and blindness among premature infants. Timely screening, diagnosis, and intervention can effectively prevent the deterioration of ROP. However, ROP diagnosis faces several challenges globally, including high subjectivity, low screening efficiency, regional disparities in screening coverage, and a severe shortage of pediatric ophthalmologists. Applying artificial intelligence (AI) as an assistive tool or as an automated method for ROP diagnosis can improve the efficiency and objectivity of diagnosis, expand screening coverage, and enable automated screening with quantified diagnostic results. In a global environment that emphasizes the development and application of medical imaging AI, developing more accurate diagnostic networks, exploring more effective AI-assisted diagnosis methods, and enhancing the interpretability of AI-assisted diagnosis can accelerate the refinement of AI policies for ROP and the deployment of AI products, promoting the development of ROP diagnosis and treatment.

          Release date: 2023-12-27 08:53
        • Study on automatic and rapid diagnosis of distal radius fracture by X-ray

          This article combines deep learning with image analysis technology and proposes an effective classification method for distal radius fracture types. First, an extended three-layer cascaded U-Net segmentation network was used to accurately segment the articular-surface and non-articular-surface regions that are most important for identifying fractures. Then, the images of the articular and non-articular surface regions were classified and trained separately to distinguish fractures. Finally, the normal or type A/B/C fracture classification was determined comprehensively from the classification results of the two images. The accuracy rates for normal, type A, type B, and type C fractures on the test set were 0.99, 0.92, 0.91, and 0.82, respectively; for orthopedic medical experts, the average recognition accuracy rates were 0.98, 0.90, 0.87, and 0.81. The proposed automatic recognition method generally outperforms the experts and can be used for preliminary auxiliary diagnosis of distal radius fractures in scenarios without expert participation.

          Release date: 2024-10-22 02:33
        • Multi-scale medical image segmentation based on pixel encoding and spatial attention mechanism

          In response to the issues of single-scale information loss and large model parameter size during the sampling process in U-Net and its variants for medical image segmentation, this paper proposes a multi-scale medical image segmentation method based on pixel encoding and spatial attention. Firstly, by redesigning the input strategy of the Transformer structure, a pixel encoding module is introduced to enable the model to extract global semantic information from multi-scale image features, obtaining richer feature information; deformable convolutions are also incorporated into the Transformer module to accelerate convergence and improve module performance. Secondly, a spatial attention module with residual connections is introduced to allow the model to focus on the foreground information of the fused feature maps. Finally, guided by ablation experiments, the network is made lightweight to enhance segmentation accuracy and accelerate convergence. The proposed algorithm achieves satisfactory results on the Synapse dataset, an official public multi-organ segmentation dataset provided by the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), with a Dice similarity coefficient (DSC) of 77.65 and a 95% Hausdorff distance (HD95) of 18.34. The experimental results demonstrate that the proposed algorithm can enhance multi-organ segmentation performance, potentially filling the gap in multi-scale medical image segmentation algorithms and providing assistance for professional physicians in diagnosis.

          Release date: 2024-06-21 05:13
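The HD95 reported above measures boundary agreement: for each boundary point, take the distance to the nearest point of the other boundary, then take the 95th percentile (rather than the maximum, to suppress outliers). A minimal sketch over explicit point sets, using the usual definition rather than the authors' evaluation code:

```python
import numpy as np

def hd95(a_pts, b_pts):
    """95th-percentile symmetric Hausdorff distance between two 2-D point sets."""
    a = np.asarray(a_pts, dtype=float)
    b = np.asarray(b_pts, dtype=float)
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),   # a -> nearest b
               np.percentile(d.min(axis=0), 95))   # b -> nearest a

print(hd95([(0, 0)], [(3, 4)]))  # 5.0 (a single 3-4-5 triangle)
```

Lower is better: an HD95 of 18.34 means that, ignoring the worst 5% of boundary points, predicted and true organ boundaries stay within roughly 18 pixels of each other.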
        • Full-scale diffusion model for adaptive-feature medical image fusion

          To address issues such as loss of detailed information, blurred target boundaries, and unclear structural hierarchy in medical image fusion, this paper proposes an adaptive-feature medical image fusion network based on a full-scale diffusion model. First, a region-level feature map is generated using a kernel-based saliency map to enhance local features and boundary details. Then, a full-scale diffusion feature extraction network is employed for global feature extraction, alongside a multi-scale denoising U-shaped network designed to fully capture cross-layer information; a multi-scale feature integration module is introduced to reinforce the texture details and structural information extracted by the encoder. Finally, an adaptive fusion scheme progressively fuses region-level features, global features, and source images layer by layer, enhancing the preservation of detail information. The proposed model is validated on the publicly available Harvard dataset and an abdominal dataset. Compared with nine other representative image fusion methods, the proposed approach achieved improvements across seven evaluation metrics. The results demonstrate that the proposed method effectively extracts both global and local features of medical images, enhances texture details and target boundary clarity, and generates fused images with high contrast and rich information, providing more reliable support for subsequent clinical diagnosis.

          Release date: 2025-10-21 03:48
        • Medical image super-resolution reconstruction via multi-scale information distillation network under multi-scale geometric transform domain

          High-resolution (HR) magnetic resonance imaging (MRI) or computed tomography (CT) images can provide clearer anatomical details of the human body, which facilitates early diagnosis of disease. However, due to the imaging system, the imaging environment, and human factors, it is difficult to obtain clear high-resolution images. In this paper, we proposed a novel medical image super-resolution (SR) reconstruction method based on a multi-scale information distillation (MSID) network in the non-subsampled shearlet transform (NSST) domain, namely the NSST-MSID network. We first proposed an MSID network, consisting mainly of a series of stacked MSID blocks, to fully exploit image features and effectively restore low-resolution (LR) images to HR images. In addition, most previous methods predict HR images in the spatial domain, producing over-smoothed outputs and losing texture details. We therefore cast the medical image SR task as the prediction of NSST coefficients, which enables the MSID network to preserve richer structural detail than prediction in the spatial domain. Finally, experimental results on our constructed medical image datasets demonstrated that the proposed method achieved better peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) values than other outstanding methods, and better preserved global topological structure and local texture detail, achieving a good medical image reconstruction effect.

          Release date: 2022-12-28 01:34
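PSNR, the first metric reported above, is defined directly from the mean squared error between the reconstruction and the ground-truth image. A minimal sketch, assuming an 8-bit data range:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((2, 2))
noisy = np.full((2, 2), 16.0)  # uniform error of 16 gray levels -> MSE = 256
print(round(psnr(ref, noisy), 2))
```

Higher PSNR means lower pixel-wise error; it complements SSIM (structural agreement) and RMSE (the square root of the same MSE), which is why the paper reports all three.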
        • Review of application of U-Net and Transformer in colon polyp image segmentation

          Colorectal cancer typically originates from the malignant transformation of colonic polyps, making the automatic and accurate segmentation of colonic polyps crucial for clinical diagnosis. Deep learning techniques such as U-Net and Transformer can effectively extract implicit features from medical images, and thus have significant potential in colonic polyp image segmentation. This paper first introduced commonly used evaluation metrics and datasets for colonic polyp segmentation. It then reviewed the application of segmentation models based on U-Net, Transformer, and their hybrid approaches in this domain. Finally, it summarized the improvement methods, advantages, and limitations of polyp segmentation algorithms, discussed the challenges faced by U-Net- and Transformer-based models, and provided an outlook on future research directions in this field.

          Release date: 2025-12-22 10:16
        • Medical image segmentation data augmentation method based on channel weight and data-efficient features

          In computer-aided medical diagnosis, obtaining labeled medical image data is expensive, while demand for model interpretability is high. However, most current deep learning models require large amounts of data and lack interpretability. To address these challenges, this paper proposes a novel data augmentation method for medical image segmentation. The uniqueness and advantage of this method lie in using gradient-weighted class activation mapping to extract data-efficient features, which are then fused with the original image. A new channel-weight feature extractor is then constructed to learn the weights between different channels. This approach achieves non-destructive data augmentation, enhancing the model's performance, data efficiency, and interpretability. Applying the method to the Hyper-Kvasir dataset improved the intersection over union (IoU) and Dice of U-Net, and on the ISIC-Archive dataset it also improved the IoU and Dice of DeepLabV3+. Furthermore, even when the training data is reduced to 70%, the proposed method still achieves 95% of the performance obtained with the entire dataset, indicating good data efficiency. Moreover, the data-efficient features used in the method carry built-in interpretable information, which enhances the interpretability of the model. The method is highly universal and plug-and-play: it is applicable to various segmentation methods and requires no modification of the network structure, so it is easy to integrate into existing medical image segmentation methods, enhancing the convenience of future research and applications.

          Release date: 2024-04-24 09:50
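The IoU reported above is the other standard overlap metric alongside Dice, and the two are monotonically related by Dice = 2·IoU / (1 + IoU), so an improvement in one implies an improvement in the other. A minimal NumPy sketch of IoU for binary masks:

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    """Intersection over union (Jaccard index) of two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy masks: intersection 2 pixels, union 4 pixels -> IoU 0.5
pred = np.zeros((4, 4)); pred[1:3, 1] = 1
target = np.zeros((4, 4)); target[1:3, 1:3] = 1
print(round(iou(pred, target), 2))  # 0.5, i.e. Dice = 2*0.5/1.5 ≈ 0.667
```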
        • Brain magnetic resonance image registration based on parallel lightweight convolution and multi-scale fusion

          Medical image registration plays an important role in medical diagnosis and treatment planning. However, current registration methods based on deep learning still face challenges such as insufficient ability to extract global information, large numbers of network parameters, and slow inference. This paper therefore proposed a new model, LCU-Net, which uses parallel lightweight convolution to improve global information extraction and multi-scale fusion to address the large parameter count and slow inference. The experimental results showed that the Dice coefficient of LCU-Net reached 0.823, the Hausdorff distance was 1.258, and the number of network parameters was reduced by about a quarter compared with the model before multi-scale fusion. The proposed algorithm shows remarkable advantages in medical image registration tasks: it not only surpasses the existing comparison algorithms in performance but also has excellent generalization and wide application prospects.

          Release date:
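The abstract above does not specify which lightweight convolution LCU-Net uses, but the parameter savings such designs target can be illustrated with a common example: replacing a standard convolution with a depthwise-separable one (a hypothetical illustration, not LCU-Net's actual layer).

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# 64 -> 128 channels with a 3x3 kernel
std = conv_params(64, 128, 3)                  # 64*128*9 = 73728
sep = depthwise_separable_params(64, 128, 3)   # 64*9 + 64*128 = 8768
print(std, sep, round(std / sep, 1))
```

The roughly 8x reduction in this toy layer shows why lightweight convolutions are a natural tool when, as the abstract reports, overall parameter count must drop without sacrificing registration accuracy.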

