How does artificial intelligence master urological board examinations? A comparative analysis of different Large Language Models' accuracy and reliability in the 2022 In-Service Assessment of the European Board of Urology

L Kollitsch, K Eredics, M Marszalek, M Rauchenwald, SD Brookman-May, M Burger, K Körner-Riffard, M May

Research output: Contribution to journal › Original Article › peer-review

11 Citations (Web of Science)

Abstract

Purpose: This study is a comparative analysis of three Large Language Models (LLMs) evaluating their rate of correct answers (RoCA) and the reliability of their generated answers on a set of urological knowledge-based questions spanning different levels of complexity.

Methods: ChatGPT-3.5, ChatGPT-4, and Bing AI underwent two testing rounds, with a 48-h gap in between, using the 100 multiple-choice questions from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA). For conflicting responses, an additional consensus round was conducted to establish conclusive answers. RoCA was compared across various question complexities. Ten weeks after the consensus round, a subsequent testing round was conducted to assess potential knowledge gain and a corresponding improvement in RoCA.

Results: Over three testing rounds, ChatGPT-3.5 achieved RoCA scores of 58%, 62%, and 59%. In contrast, ChatGPT-4 achieved RoCA scores of 63%, 77%, and 77%, while Bing AI yielded scores of 81%, 73%, and 77%, respectively. Agreement rates between rounds 1 and 2 were 84% (kappa = 0.67, p < 0.001) for ChatGPT-3.5, 74% (kappa = 0.40, p < 0.001) for ChatGPT-4, and 76% (kappa = 0.33, p < 0.001) for Bing AI. In the consensus round, ChatGPT-4 and Bing AI significantly outperformed ChatGPT-3.5 (77% and 77% vs. 59%, both p = 0.010). All LLMs demonstrated decreasing RoCA scores with increasing question complexity (p < 0.001). In the fourth round, no significant improvement in RoCA was observed for any of the three LLMs.

Conclusions: The performance of the tested LLMs in addressing urological specialist inquiries warrants further refinement. Moreover, their limited response reliability adds to existing concerns about their current utility for educational purposes.
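The between-round reliability reported above is quantified with Cohen's kappa, which corrects the raw agreement rate for agreement expected by chance. A minimal stdlib-only sketch of that statistic follows; the answer strings are illustrative placeholders, not data from the study:

```python
from collections import Counter

def cohens_kappa(round1, round2):
    """Cohen's kappa for agreement between two rounds of categorical answers.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each round's
    marginal answer distribution.
    """
    assert len(round1) == len(round2) and round1
    n = len(round1)
    # Observed agreement: fraction of items answered identically in both rounds.
    p_o = sum(a == b for a, b in zip(round1, round2)) / n
    # Chance agreement: product of the two rounds' marginal frequencies.
    c1, c2 = Counter(round1), Counter(round2)
    p_e = sum(c1[cat] * c2.get(cat, 0) for cat in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two rounds of A-E multiple-choice answers to ten questions.
r1 = list("AABCDEEABC")
r2 = list("AABCDEEBBC")
print(round(cohens_kappa(r1, r2), 3))  # → 0.873
```

Here 9 of 10 answers agree (p_o = 0.9), but kappa is lower than 0.9 because some of that agreement would occur by chance, which is why the study's kappa values (0.33 to 0.67) sit well below the raw agreement rates (74 to 84%).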
Original language: English
Article number: 20
Number of pages: 10
Journal: World Journal of Urology
Volume: 42
Issue number: 1
DOIs
Publication status: Published - 10 Jan 2024

Keywords

  • AI
  • LLM
  • ChatGPT-3.5
  • ChatGPT-4
  • BING AI
  • Medical exam
  • ISA
  • EBU
  • Urology exam
  • Pass mark
