West China Medical Publishers
Search results for keyword "Contrastive learning": 2 results
        • A group-level stimulus-aware self-supervised soft contrastive learning framework for electroencephalogram emotion recognition

          To reduce the label dependency of traditional electroencephalogram (EEG) emotion recognition methods and to address the limitations of existing contrastive learning approaches in modeling cross-stimulus emotional similarity, this paper proposes a group-level stimulus-aware self-supervised soft contrastive learning framework (GSCL) for EEG emotion recognition. GSCL constructs contrastive learning tasks based on the consistency of subjects' brain activities under identical stimuli and incorporates a soft assignment mechanism that adaptively adjusts the weights of negative sample pairs according to inter-sample distances, thereby improving representation quality (a minimal sketch of this soft-weighting idea follows the result list). The study also designs a learnable shuffling-splitting data augmentation method that dynamically optimizes the data distribution via learnable shuffling parameters. On the public DEAP emotion dataset, the proposed method achieves accuracies of 94.91%, 95.29%, and 92.78% on the valence, arousal, and four-class classification tasks, respectively; on the Shanghai Jiao Tong University emotion EEG dataset (SEED), its three-class classification accuracy reaches 95.25%. These results show that the proposed method yields higher classification accuracy and offers new insight into self-supervised EEG emotion recognition.

          Release date: 2026-02-06
        • Small bowel video keyframe retrieval based on multi-modal contrastive learning

          Retrieving the keyframes most relevant to a text query from labeled small bowel videos can efficiently and accurately locate pathological regions. However, training directly on raw video data is extremely slow, while learning visual representations only from image-text datasets leads to computational inconsistency. To tackle this challenge, a small bowel video keyframe retrieval framework based on multi-modal contrastive learning (KRCL) is proposed. The framework fully exploits the textual information in video category labels to learn video features closely related to the text, while modeling temporal information within a pretrained image-text model. It thus transfers knowledge learned by image-text multimodal models to the video domain, enabling interaction among medical videos, images, and text data (see the second sketch after the result list). Experimental results on the Hyper-Kvasir gastrointestinal disease detection dataset and the Microsoft Research video-to-text (MSR-VTT) retrieval dataset demonstrate the effectiveness and robustness of KRCL, which achieves state-of-the-art performance on nearly all evaluation metrics.

          Release date: 2025-04-24
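
As a way to make the soft assignment idea from the first abstract concrete, here is a minimal sketch of a distance-weighted contrastive loss. This is an assumption about the general shape of such a loss, not GSCL's actual implementation: the Gaussian weighting, the `sigma` bandwidth, and the tensor layout are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(anchor, positive, negatives, temperature=0.1, sigma=1.0):
    """Hypothetical soft contrastive loss (not the paper's exact form).

    anchor, positive: (B, D) embeddings; negatives: (B, K, D).
    Negatives that sit close to the anchor in embedding space are
    down-weighted, since they may be false negatives, e.g. trials that
    evoke a similar emotional response.
    """
    pos_sim = F.cosine_similarity(anchor, positive, dim=-1) / temperature                # (B,)
    neg_sim = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / temperature  # (B, K)

    # Soft assignment (assumed Gaussian form): small distance -> small weight.
    dist = torch.cdist(anchor.unsqueeze(1), negatives).squeeze(1)                        # (B, K)
    weights = 1.0 - torch.exp(-dist.pow(2) / (2 * sigma ** 2))                           # (B, K)

    # InfoNCE-style objective with distance-weighted negative terms.
    numerator = torch.exp(pos_sim)
    denominator = numerator + (weights * torch.exp(neg_sim)).sum(dim=1)
    return -torch.log(numerator / denominator).mean()
```

The point of the weighting is that a hard InfoNCE loss repels every non-positive pair equally, whereas the soft weights let samples that likely share the same underlying emotion contribute less repulsion.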
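Similarly, the text-conditioned keyframe scoring described in the second abstract can be sketched on top of a frozen CLIP-style image-text encoder. The encoder interfaces, the GRU temporal module, and every name below are hypothetical placeholders, not KRCL's published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyframeTextScorer(nn.Module):
    """Hypothetical text-to-keyframe relevance model.

    Per-frame features come from a pretrained image encoder and the query
    from its paired text encoder; a small recurrent module adds the
    temporal context that a pure image-text model lacks.
    """
    def __init__(self, image_encoder, text_encoder, dim=512):
        super().__init__()
        self.image_encoder = image_encoder   # pretrained, frame -> (dim,), kept frozen
        self.text_encoder = text_encoder     # pretrained, tokens -> (dim,), kept frozen
        self.temporal = nn.GRU(dim, dim, batch_first=True)  # trainable temporal module

    def forward(self, frames, text_tokens):
        # frames: (B, T, C, H, W) -> per-frame embeddings (B, T, D)
        b, t = frames.shape[:2]
        f = self.image_encoder(frames.flatten(0, 1)).view(b, t, -1)
        f, _ = self.temporal(f)               # let each frame see its neighbours
        q = self.text_encoder(text_tokens)    # (B, D) embedding of the label text
        f = F.normalize(f, dim=-1)
        q = F.normalize(q, dim=-1)
        # Per-frame relevance to the text; argmax over T picks the keyframe.
        return torch.einsum('btd,bd->bt', f, q)  # (B, T)
```

Training only the temporal module with a contrastive objective, best-scoring frame against its label text as the positive and other labels in the batch as negatives, is one plausible way to transfer image-text knowledge to video as the abstract describes.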

