Studies of chatbot health advice (CHA) driven by large language models are increasing rapidly, yet their reporting is marked by significant heterogeneity and incompleteness, which severely limits the scientific credibility and reproducibility of their findings. To promote the effective dissemination and application of the newly released Chatbot Assessment Reporting Tool (CHART) statement, this paper provides a systematic, example-based interpretation of the guideline. It dissects the 12 main items and 39 sub-items of the CHART checklist item by item, elaborating on the methodological rationale behind each reporting requirement. Particular attention is paid to requirements tailored to the unique characteristics of generative AI, such as the transparent disclosure of prompt engineering, query strategies, and dialogue safety. To bridge the gap between theory and practice, a high-quality published CHA study is used as an exemplar to demonstrate the practical application of each reporting item. This interpretation aims to serve as a clear, practical handbook for researchers, journal reviewers, and editors, fostering standardized, high-quality development of CHA research and promoting the safe and effective application of AI in healthcare.
Systematic reviews and meta-analyses are essential methods in evidence-based medicine for synthesizing research evidence and guiding clinical decision-making. However, with the rapid expansion of medical research data, traditional approaches face significant challenges in efficiency, accuracy, and reliability. In recent years, rapid advances in artificial intelligence (AI), particularly in natural language processing (NLP), machine learning (ML), and large language models (LLMs), have provided robust support for automating systematic reviews and meta-analyses and making them more intelligent. This paper systematically reviews the progress of AI applications in these fields, tracing the evolution from traditional tools to intelligent platforms, and analyzes the functional characteristics, application scenarios, and limitations of existing AI-driven tools. It further explores the challenges AI faces in adapting to the medical domain, processing multimodal data, and ensuring ethical transparency, and offers potential solutions and optimization strategies. Looking ahead, with continued technical refinement, enhanced data sharing, and the establishment of industry standards, AI is expected to significantly improve the efficiency and quality of systematic reviews and meta-analyses, driving the transition from "tool-driven" workflows to "intelligent collaboration." The deep integration of AI not only provides fresh momentum for evidence-based medicine but also reshapes its methodological foundation, laying a solid basis for a more intelligent, equitable, and efficient future.
The reporting quality of systematic reviews and meta-analyses is fundamental to the value of evidence in evidence-based medicine. As the internationally endorsed standard, the PRISMA statement and its extensive suite of extensions are crucial for standardizing reporting and enhancing transparency. However, the academic community still lacks a comprehensive, systematic understanding of the full PRISMA framework and its underlying challenges. This review aims to systematically delineate and analyze the complete PRISMA reporting guideline framework, evaluate its application value, identify its implementation challenges, and forecast its future directions. It traces PRISMA's evolution from its predecessor, QUOROM, to PRISMA 2020, highlighting key shifts in its core principles. It then constructs, for the first time, a multi-dimensional framework for the PRISMA family, categorizing its extensions by foundational version, study design and analysis type, reporting process stage, disciplinary domain, and specific area of focus, complemented by a forward-looking analysis of tools currently under development. The review examines deep-seated challenges in PRISMA's implementation, including common misconceptions, inconsistent application, limited cross-disciplinary adaptability, and methodological limitations, and argues that PRISMA's future lies in balancing standardization with flexibility, expanding global uptake, and integrating deeply with emerging technologies such as artificial intelligence. The PRISMA framework has evolved from a mere reporting checklist into a core methodological architecture that promotes standardization throughout the entire evidence synthesis lifecycle. Its continuous optimization and proper application are of critical theoretical and practical significance for improving the overall quality and impact of evidence synthesis research worldwide.