West China Medical Publishers

        Search results for keyword "large language models": 3 results
        • Research progress of ChatGPT in application of medical health consultation services

          As technology continues to advance and artificial intelligence technology is widely applied, ChatGPT (Chat Generative Pre-trained Transformer) is beginning to make its mark in the field of healthcare consultation services. This article summarizes the current applications of ChatGPT in healthcare consultation services, reviewing its roles in four areas: dissemination of disease knowledge, assisting in the understanding of medical information, personalized health education and guidance, and preliminary diagnostic assistance and medical guidance. It also explores the development prospects of ChatGPT in healthcare consultation services, as well as the challenges and ethical dilemmas it faces in this field.

          Release date: 2025-07-29 05:02
        • Ruibin Agent versus mainstream large language models: A comparative study on medical literature comprehension with esophageal cancer as a case study

          Objective To explore the application value of artificial intelligence in medical research assistance, and to analyze the key paths toward precise execution of model instructions, more complete model interpretation, and control of hallucinations. Methods Taking esophageal cancer research as the scenario, five types of literature (research articles, case reports, reviews, editorials, and guidelines) were selected for model interpretation tests. Model performance was systematically evaluated along five dimensions: recognition accuracy, format accuracy, instruction execution accuracy, content reliability rate, and content completeness index. The performance of the Ruibin Agent, GPT-4o, Claude 3.7 Sonnet, DeepSeek V3, and DouBao-pro models on medical literature interpretation tasks was compared. Results A total of 15 studies were included, 3 of each type, and the five models collectively underwent 1875 tests. Owing to poor recognition of the editorial type, the overall recognition accuracy of Ruibin Agent was significantly lower than that of the other models (92.0% vs. 100.0%, P<0.001). In format accuracy, Ruibin Agent was significantly better than Claude 3.7 Sonnet (98.7% vs. 92.0%, P=0.002) and GPT-4o (98.7% vs. 78.9%, P<0.001). In instruction execution accuracy, Ruibin Agent outperformed GPT-4o (97.3% vs. 80.0%, P<0.001). In content reliability rate, Ruibin Agent was significantly lower than Claude 3.7 Sonnet (84.0% vs. 92.0%, P=0.010) and DeepSeek V3 (84.0% vs. 94.7%, P<0.001). For the content completeness index, the median scores of Ruibin Agent, GPT-4o, Claude 3.7 Sonnet, DeepSeek V3, and DouBao-pro were 0.71, 0.60, 0.85, 0.74, and 0.77, respectively. Conclusion Ruibin Agent has clear advantages in formatted interpretation of medical literature and in instruction execution accuracy. Future work should focus on improving recognition of editorials, strengthening coverage of the core elements of each literature type to improve interpretation completeness, and raising content reliability by optimizing the confidence mechanism, so as to ensure rigorous medical literature interpretation.

          Release date: 2025-09-22 05:53
        • The application value of large language models in predicting the natural outcome of ventricular septal defect

          Objective To evaluate the accuracy of three large language models (LLMs), ChatGPT, Grok, and DeepSeek, in predicting the natural outcome of pediatric ventricular septal defect (VSD) and their discrepancies from actual clinical outcomes, providing insight into whether LLMs can help clinicians offer personalized management recommendations. Methods A retrospective analysis was performed on clinical data from pediatric patients with VSD admitted to the Children's Hospital of Nanjing Medical University between October and December 2020. VSD severity, the probability of spontaneous closure, and the necessity of surgery were evaluated by ChatGPT, Grok, DeepSeek, and an expert panel, respectively. Intergroup differences were analyzed and compared with actual outcomes, and the stability of model performance was assessed from three repeated assessments by each LLM. Results A total of 146 children were enrolled, including 87 (59.6%) males and 59 (40.4%) females, with a median age at first diagnosis of 2.0 months (IQR 1.1-3.4). Significant differences were observed between the Grok group and the expert panel in assessing the probability of spontaneous closure and the necessity of surgery (P=0.01 and P=0.02, respectively). The ChatGPT group also differed from the expert panel in evaluating the necessity of surgery (P=0.05). Compared with actual clinical outcomes, only the Grok group showed a significant difference (P<0.05), while ChatGPT achieved the highest consistency between predicted and actual outcomes. Intragroup analysis of the three repeated assessments showed no statistically significant differences (all P>0.05). Conclusion LLMs demonstrate potential and high stability in predicting the natural outcome of VSD; ChatGPT in particular shows the highest consistency with actual outcomes. LLMs can serve as an auxiliary tool to support the formulation of personalized management strategies.

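The abstracts above report pairwise model comparisons as proportions with P values (e.g., format accuracy 98.7% vs. 78.9%, P<0.001). The abstracts do not state which statistical test was used; one standard choice for comparing two proportions from a 2×2 table of counts is the Fisher exact test. A minimal stdlib sketch, where the per-model trial counts in the usage line are purely hypothetical illustrations, not figures from the studies:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Returns the probability, with margins fixed, of observing any table
    at least as improbable as the one observed (the usual two-sided rule).
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p(x):
        # Hypergeometric probability that the top-left cell equals x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p(a)
    lo = max(0, col1 - (n - row1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    # Sum probabilities of all tables no more likely than the observed one
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# Hypothetical example: 74/75 correct vs. 59/75 correct
p_value = fisher_exact_two_sided(74, 1, 59, 16)
```

For accuracies this far apart at these sample sizes the test returns a small P value, consistent in spirit with the reported significance levels, though the papers' own tests and counts may differ.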
