As artificial intelligence technology advances and sees increasingly wide application, ChatGPT (Chat Generative Pre-trained Transformer) is beginning to make its mark in the field of healthcare consultation services. This article summarizes the current applications of ChatGPT in healthcare consultation services, reviewing its roles in four areas: dissemination of disease knowledge, assisting in the understanding of medical information, personalized health education and guidance, and preliminary diagnostic assistance and medical guidance. It also explores the development prospects of ChatGPT in healthcare consultation services, as well as the challenges and ethical dilemmas it faces in this field.
Large Language Models (LLMs) are highly sophisticated deep learning models pre-trained on massive datasets, with ChatGPT representing a prominent application of LLMs in the field of generative models. Since the release of ChatGPT at the end of 2022, generative chatbots have become widely employed across various medical disciplines. As Evidence-Based Medicine (EBM) is a crucial discipline guiding clinical practice, the use of generative chatbots like ChatGPT in EBM is gradually increasing. However, the potential, challenges, and intricacies of their application in the domain of EBM remain unclear. This paper aims to explore and discuss the prospects, challenges, and considerations associated with the application of ChatGPT in the field of EBM through a review of relevant literature. The discussion spans four aspects: evidence generation, synthesis, assessment, and dissemination and implementation, providing researchers with insights into the latest developments and suggestions for future research.
As one of the hot topics in the field of artificial intelligence, large language models are being applied in various domains, including medical research. ChatGPT (Chat Generative Pre-trained Transformer), as one of the most representative and leading large language models, has gained popularity among researchers due to its logical coherence and natural language generation capabilities. This article reviews the applications and limitations of ChatGPT in three key areas of medical research: scientific writing, data analysis, and drug development. Furthermore, it explores future development trends and provides recommendations for improvement, offering a reference for the application of ChatGPT in medical research.
As one of the hottest artificial intelligence technologies today, ChatGPT plays a significant role in advancing the field of evidence-based medicine, particularly in expanding the sources of original evidence, enhancing the efficiency of evidence acquisition, aiding in shared decision-making between doctors and patients, and promoting education in evidence-based medicine and public science education. Presently, ChatGPT is in its "technological budding phase," and it is crucial to be wary of the risks it brings, such as "evidence contamination", algorithmic black boxes, security vulnerabilities, and the digital divide. To balance the positive effects and potential risks of ChatGPT in the realm of evidence-based medicine, we offer countermeasures and suggestions from the perspectives of ChatGPT's ethical standards, evidence sources, expert verification, and usage norms.
With the rapid development of artificial intelligence and natural language processing technologies, ChatGPT (Chat Generative Pre-trained Transformer) has seen preliminary application in the medical domain. ChatGPT has the advantage of generating coherent and logically sound natural language based on big data, and some scholars have conducted preliminary discussions of its application and effectiveness in the medical domain. Drawing on the authors' experience with ChatGPT, this article summarizes its application progress in medical education, assisted clinical decision-making, and medical research, and anticipates its future development trends. It also provides an in-depth analysis of the challenges and limitations of ChatGPT in practical medical applications, laying the foundation for the standardized use of ChatGPT in the medical domain.
With the rapid advancement of artificial intelligence technologies, especially the development of large language models like ChatGPT, clinical medical practice is undergoing an unprecedented technological revolution. Through efficient processing and analysis of large datasets, these advanced technologies not only provide medical professionals with auxiliary diagnoses and treatment suggestions but also significantly enhance the quality and efficiency of medical education. This study conducts a comprehensive analysis and review of the applications of large language models in various areas, including clinical inquiry, history taking, medical literature writing, clinical decision support, optimization of medical portal websites, patient health management, medical education, academic research, and scientific writing. However, the application of these technologies is not without flaws and presents several limitations and ethical challenges. This paper focuses on challenges related to technological errors, academic dishonesty, risks of abuse, over-reliance, the possibility of misdiagnosis and treatment errors, and issues of accountability. In conclusion, large language models demonstrate tremendous potential in the integration and advancement of medical practice. Nevertheless, while fully harnessing the benefits brought by ChatGPT, it is essential to acknowledge and address these ethical challenges to ensure that the application of ChatGPT in the medical field is responsible and effective.
Objective To evaluate the accuracy of three large language models (LLMs), ChatGPT, Grok, and DeepSeek, in predicting the natural outcome of pediatric ventricular septal defect (VSD) and their discrepancies with actual clinical outcomes, providing insight into whether LLMs can assist clinicians in providing personalized management recommendations. Methods Clinical data from pediatric patients with VSD admitted to the Children's Hospital of Nanjing Medical University between October and December 2020 were retrospectively analyzed. VSD severity, the probability of spontaneous closure, and the necessity of surgery were evaluated by ChatGPT, Grok, DeepSeek, and an expert panel, respectively. Intergroup differences were analyzed and also compared with the actual outcomes. The stability of model performance was assessed based on three repeated evaluations by the LLMs. Results A total of 146 children were enrolled, including 87 (59.6%) males and 59 (40.4%) females, with a median age at first diagnosis of 2.0 months (IQR: 1.1-3.4 months). Significant differences were observed between the Grok group and the expert panel in assessing the probability of spontaneous closure and the necessity of surgery (P=0.01 and P=0.02, respectively). The ChatGPT group also differed from the expert panel in evaluating the necessity of surgery (P=0.05). In comparison with the actual clinical outcomes, only the Grok group showed a significant difference (P<0.05), while ChatGPT achieved the highest consistency between predicted and actual outcomes. Intra-group analysis of the three repeated assessments in the LLM groups showed no statistically significant differences (all P>0.05). Conclusion LLMs demonstrate potential and high stability in predicting the natural outcome of VSD. In particular, ChatGPT shows the highest consistency between its assessments and actual outcomes. LLMs can serve as an auxiliary tool to support the formulation of personalized management strategies.