Displaying all 2 publications

  1. Gan RK, Uddin H, Gan AZ, Yew YY, González PA
    Sci Rep, 2023 Nov 21;13(1):20350.
    PMID: 37989755 DOI: 10.1038/s41598-023-46986-0
    Since its initial launch, ChatGPT has gained significant attention from the media, with many claiming that ChatGPT's arrival is a transformative milestone in the advancement of the AI revolution. Our aim was to assess the performance of ChatGPT before and after teaching it the triage of mass casualty incidents, utilizing a validated questionnaire specifically designed for such scenarios. In addition, we compared the triage performance between ChatGPT and medical students. Our cross-sectional study employed a mixed-methods analysis to assess the performance of ChatGPT in mass casualty incident triage, pre- and post-teaching of Simple Triage And Rapid Treatment (START) triage. After teaching the START triage algorithm, ChatGPT scored an overall triage accuracy of 80%, with only 20% of cases being over-triaged. The mean accuracy of medical students on the same questionnaire was 64.3%. Qualitative analysis of the pre-determined themes 'walking-wounded', 'respiration', 'perfusion', and 'mental status' showed similar ChatGPT performance pre- and post-teaching of START triage. Additional themes of 'disclaimer', 'prediction', 'management plan', and 'assumption' were identified during the thematic analysis. ChatGPT exhibited promising results in effectively responding to mass casualty incident questionnaires. Nevertheless, additional research is necessary to ensure its safety and efficacy before clinical implementation.
  2. Gan RK, Ogbodo JC, Wee YZ, Gan AZ, González PA
    Am J Emerg Med, 2024 Jan;75:72-78.
    PMID: 37967485 DOI: 10.1016/j.ajem.2023.10.034
    AIM: The objective of our research is to evaluate and compare the performance of ChatGPT, Google Bard, and medical students in performing START triage during mass casualty situations.

    METHOD: We conducted a cross-sectional analysis to compare ChatGPT, Google Bard, and medical students in mass casualty incident (MCI) triage using the Simple Triage And Rapid Treatment (START) method. A validated questionnaire with 15 diverse MCI scenarios was used to assess triage accuracy and content analysis in four categories: "Walking wounded," "Respiration," "Perfusion," and "Mental Status." Statistical analysis compared the results.

    RESULT: Google Bard demonstrated a notably higher accuracy of 60%, while ChatGPT achieved an accuracy of 26.67% (p = 0.002). Comparatively, medical students performed at an accuracy rate of 64.3% in a previous study. However, there was no significant difference observed between Google Bard and medical students (p = 0.211). Qualitative content analysis of 'walking-wounded', 'respiration', 'perfusion', and 'mental status' indicated that Google Bard outperformed ChatGPT.

    CONCLUSION: Google Bard was found to be superior to ChatGPT in correctly performing mass casualty incident triage. Google Bard achieved an accuracy of 60%, while ChatGPT achieved an accuracy of only 26.67%. This difference was statistically significant (p = 0.002).
