West China Medical Publishers
        Search results for keyword "Cross modal": 2 results
        • Cross modal medical image online hash retrieval based on online semantic similarity

          Online hashing methods are receiving increasing attention in cross-modal medical image retrieval research. However, existing online methods often lack the ability to maintain semantic correlation between new and existing data. To this end, we proposed an online semantic similarity cross-modal hashing (OSCMH) learning framework to incrementally learn compact binary hash codes for streaming medical data. Within it, a sparse representation of existing data over online anchor sets was designed to avoid semantic forgetting and to adaptively update hash codes, which effectively maintained the semantic correlation between existing and arriving data, reduced information loss, and improved training efficiency. In addition, an online discrete optimization method was proposed to solve the binary optimization problem of the hash codes by incrementally updating the hash functions and optimizing the hash codes on streaming medical data. Compared with existing online and offline hashing methods, the proposed algorithm achieved average retrieval accuracy improvements of 12.5% and 14.3% on two datasets, respectively, effectively enhancing retrieval efficiency in the medical imaging field.

          Release date: 2025-04-24
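Hash-based retrieval of the kind the abstract above describes ranks database items by Hamming distance between compact binary codes. The following is a minimal sketch of that general idea only (sign-binarized random linear projections), not the OSCMH method itself; all dimensions and names are illustrative:

```python
import numpy as np

def hash_codes(features, projection):
    """Map real-valued features to {-1, +1} codes via the sign of a linear projection."""
    return np.sign(features @ projection)

def hamming_distance(a, b):
    """Number of differing bits between two {-1, +1} codes."""
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))     # hypothetical projection: 8-dim features -> 16-bit codes
db = rng.standard_normal((100, 8))   # existing "database" feature batch
query = rng.standard_normal((1, 8))  # a newly arriving query

db_codes = hash_codes(db, W)
q_code = hash_codes(query, W)

# Retrieval: rank database items by Hamming distance to the query code.
dists = [hamming_distance(q_code[0], c) for c in db_codes]
nearest = int(np.argmin(dists))
```

An online method such as the one described would additionally update `W` incrementally as new batches arrive, rather than fixing it in advance.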
        • Shape-aware cross-modal domain adaptive segmentation model

          Cross-modal unsupervised domain adaptation (UDA) aims to transfer segmentation models trained on a labeled source modality to an unlabeled target modality. However, existing methods often fail to fully exploit shape priors and intermediate feature representations, resulting in limited generalization ability in cross-modal transfer tasks. To address this challenge, we propose a segmentation model based on shape-aware adaptive weighting (SAWS) that enhances the model's ability to perceive the target area and to capture global and local information. Specifically, we design a multi-angle strip-shaped shape perception (MSSP) module that captures shape features from multiple orientations through an angular pooling strategy, improving structural modeling under cross-modal settings. In addition, an adaptive weighted hierarchical contrastive (AWHC) loss is introduced to fully leverage intermediate features and enhance segmentation accuracy for small target structures. The proposed method is evaluated on the multi-modality whole heart segmentation (MMWHS) dataset. Experimental results demonstrate that SAWS achieves superior performance in cross-modal cardiac segmentation, with a Dice score of 70.1% and an average symmetric surface distance (ASSD) of 4.0 for the computed tomography (CT)→magnetic resonance imaging (MRI) task, and a Dice score of 83.8% and an ASSD of 3.7 for the MRI→CT task, outperforming existing state-of-the-art methods. Overall, this study proposes a shape-aware cross-modal medical image segmentation method, which effectively improves the structure-awareness and generalization performance of the UDA model.

          Release date: 2025-12-22
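The Dice score reported in the abstract above measures the overlap between a predicted segmentation mask and the ground truth. A minimal sketch of the standard metric (not the authors' evaluation code; the example masks are illustrative):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy 2x3 masks: they agree on 2 foreground pixels, each has 3 in total.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```

ASSD, the other metric quoted, instead averages the symmetric surface-to-surface distances between the two mask boundaries and is therefore sensitive to contour errors that Dice overlooks.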