Jawanpuria, Ankita; Behera, Aruna Rani; Dash, Chinmaya; Rahman, Mohammad Hifz Ur
2024-06-19
European Journal of Clinical and Experimental Medicine, Vol. 22, No. 2 (2024), pp. 347-352
https://repozytorium.ur.edu.pl/handle/item/10607
The study was approved by the Institutional Ethical Committee (No. MTMC/IEC/2023/11).
Introduction and aim. An AI model such as ChatGPT is a good source of knowledge. We explore the potential of AI models to complement the expertise of healthcare professionals by providing real-time, evidence-based information on infection prevention and control (IPC).
Material and methods. This study involved 110 queries related to IPC, validated by subject experts in IPC. The responses from ChatGPT were evaluated by experienced microbiologists using Bloom's taxonomy, with a score above 4 indicating a good response. Statistical analysis was performed using the correlation coefficient and Cohen's kappa.
Results. The overall score was 4.33 (95% CI, Q1 3.65 – Q3 4.64), indicating ChatGPT's substantial IPC knowledge. A good response (i.e., a score >4) was found for 70 (63.6%) questions, while for 10 (9%) questions the response was poor. Poor responses were seen in questions related to needlestick injury and personal protective equipment (PPE) doffing. The overall correlations were found to be significant, and Cohen's kappa confirmed moderate to substantial agreement between evaluators.
Conclusion. ChatGPT demonstrated a commendable understanding of IPC principles across various domains, but the study identifies specific instances where the model may require further refinement, especially in critical scenarios such as needlestick injuries and PPE doffing.
Language: eng
License: Attribution-NonCommercial-NoDerivs 3.0 Poland (http://creativecommons.org/licenses/by-nc-nd/3.0/pl/)
Keywords: artificial intelligence; ChatGPT; infection control; large language model; medical education
Title: ChatGPT in hospital infection prevention and control – assessing knowledge of an AI model based on a validated questionnaire
Type: article
DOI: 10.15584/ejcem.2024.2.19
ISSN: 2544-1361