Publication
ChatGPT versus human in generating medical graduate exam questions – An international prospective study
Publisher:
Cold Spring Harbor Laboratory
Date:
16-05-2023
DOI:
10.1101/2023.05.13.23289943
Abstract: This is a prospective study on the quality of multiple-choice questions (MCQs) generated by the language model ChatGPT for use in a medical graduate examination. Fifty MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's and Bailey & Love's). Another 50 MCQs were drafted by two university professoriate staff members using the same textbooks. All 100 MCQs were individually numbered, randomized and sent to five independent international assessors for quality assessment using a standardized assessment score across five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for medical graduate examination. The total time required for ChatGPT to create the 50 questions was 20 minutes 25 seconds, whereas the two human examiners took a total of 211 minutes 33 seconds to draft their 50 questions. When the mean scores of the AI-constructed questions were compared with those of the human-drafted questions, the AI was inferior to the humans only in the relevance domain (AI: 7.56 ± 0.94 vs. human: 7.88 ± 0.52, p = 0.04). There was no significant difference in question quality between AI-drafted and human-drafted questions in the total assessment score or in the other domains. Questions generated by the AI yielded a wider range of scores, while those created by the humans were consistent and fell within a narrower range. ChatGPT has the potential to generate MCQs of comparable quality for medical graduate examinations within a significantly shorter time.
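The abstract does not state which statistical test produced the p = 0.04 result for the relevance domain. As an illustration only, the sketch below shows how a domain-level mean comparison between AI-generated and human-drafted question scores could be run with a two-sample Welch's t-test; the synthetic scores, group sizes, and the choice of test are all assumptions, not data or methods taken from the study.

```python
# Illustrative sketch only: compares hypothetical assessor scores for one
# domain (e.g. relevance) between AI-generated and human-drafted MCQs.
# The study's raw data and its statistical test are not reported in the
# abstract; Welch's t-test and the simulated scores are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-question mean assessor scores (50 questions per group),
# drawn to roughly match the reported summary statistics
# (AI: 7.56 +/- 0.94, human: 7.88 +/- 0.52).
ai_scores = rng.normal(loc=7.56, scale=0.94, size=50)
human_scores = rng.normal(loc=7.88, scale=0.52, size=50)

# Two-sample Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(ai_scores, human_scores, equal_var=False)

print(f"AI:    mean = {ai_scores.mean():.2f}, SD = {ai_scores.std(ddof=1):.2f}")
print(f"Human: mean = {human_scores.mean():.2f}, SD = {human_scores.std(ddof=1):.2f}")
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```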