Displaying all 3 publications

  1. Gan RK, Sanchez Martinez A, Abu Hasan MA, Castro Delgado R, Arcos González P
    J Ultrasound, 2023 Jun;26(2):343-353.
    PMID: 36694072 DOI: 10.1007/s40477-022-00761-5
    INTRODUCTION: Necrotizing fasciitis (NF) is a rapidly progressive necrosis of the fascial layer with a high mortality rate. It is a life-threatening medical emergency that requires urgent treatment. The lack of skin findings in NF makes diagnosis difficult and requires a high index of clinical suspicion. The use of ultrasound may help clinicians improve diagnostic speed and accuracy, leading to better management decisions and patient outcomes. This literature review examines the use of point-of-care ultrasonography in diagnosing necrotizing fasciitis.

    METHOD: We searched relevant electronic databases, including PubMed, MEDLINE, and Scopus, and performed a systematic review. Keywords used were "necrotizing fasciitis" or "necrotising fasciitis" or "necrotizing soft tissue infections", combined with "point-of-care ultrasonography", "ultrasonography", or "ultrasound". No temporal limitation was set. An additional search was performed via Google Scholar, and the top 100 entries were screened.

    RESULTS: Of the 540 papers screened, only 21 were related to diagnosing necrotizing fasciitis using ultrasonography. These comprised three observational studies, 16 case reports, and two case series, covering the period from 1976 to 2022.

    CONCLUSION: Although several published papers report promising results for ultrasonography in diagnosing NF, more studies are required to investigate its diagnostic accuracy and its potential to reduce the delay before surgical intervention, morbidity, and mortality.

  2. Gan RK, Uddin H, Gan AZ, Yew YY, González PA
    Sci Rep, 2023 Nov 21;13(1):20350.
    PMID: 37989755 DOI: 10.1038/s41598-023-46986-0
    Since its initial launch, ChatGPT has gained significant media attention, with many claiming that its arrival is a transformative milestone in the AI revolution. Our aim was to assess the performance of ChatGPT before and after teaching it the triage of mass casualty incidents, using a validated questionnaire specifically designed for such scenarios. In addition, we compared triage performance between ChatGPT and medical students. Our cross-sectional study employed a mixed-methods analysis to assess the performance of ChatGPT in mass casualty incident triage, pre- and post-teaching of Simple Triage And Rapid Treatment (START) triage. After being taught the START triage algorithm, ChatGPT scored an overall triage accuracy of 80%, with only 20% of cases being over-triaged. The mean accuracy of medical students on the same questionnaire was 64.3%. Qualitative analysis of the pre-determined themes 'walking wounded', 'respiration', 'perfusion', and 'mental status' showed similar ChatGPT performance pre- and post-teaching of START triage. Additional themes of 'disclaimer', 'prediction', 'management plan', and 'assumption' were identified during the thematic analysis. ChatGPT exhibited promising results in responding to mass casualty incident questionnaires. Nevertheless, additional research is necessary to ensure its safety and efficacy before clinical implementation.
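
    The START method assessed in this study and the next follows a short, fixed decision sequence over the four themes analysed above: ambulation ("walking wounded"), respiration, perfusion, and mental status. As a minimal illustrative sketch (not taken from these papers; the function and parameter names are my own), the decision flow can be written as:

    ```python
    def start_triage(can_walk, breathing, breathing_after_repositioning,
                     respiratory_rate, radial_pulse_present, obeys_commands):
        """Classify a casualty under the START algorithm.

        Returns 'Minor' (green), 'Deceased' (black),
        'Immediate' (red), or 'Delayed' (yellow).
        """
        if can_walk:
            return "Minor"        # walking wounded
        if not breathing:
            # Airway is repositioned once; persistent apnoea -> expectant
            if not breathing_after_repositioning:
                return "Deceased"
            return "Immediate"    # breathing resumed after airway opening
        if respiratory_rate > 30:
            return "Immediate"    # abnormal respiration
        if not radial_pulse_present:
            return "Immediate"    # poor perfusion (no radial pulse)
        if not obeys_commands:
            return "Immediate"    # altered mental status
        return "Delayed"          # all checks passed but cannot walk
    ```

    For example, a non-ambulatory casualty breathing at 35 breaths per minute is triaged 'Immediate', while one with normal respiration, a radial pulse, and intact mentation is 'Delayed'.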
  3. Gan RK, Ogbodo JC, Wee YZ, Gan AZ, González PA
    Am J Emerg Med, 2024 Jan;75:72-78.
    PMID: 37967485 DOI: 10.1016/j.ajem.2023.10.034
    AIM: The objective of our research is to evaluate and compare the performance of ChatGPT, Google Bard, and medical students in performing START triage during mass casualty situations.

    METHOD: We conducted a cross-sectional analysis to compare ChatGPT, Google Bard, and medical students in mass casualty incident (MCI) triage using the Simple Triage And Rapid Treatment (START) method. A validated questionnaire with 15 diverse MCI scenarios was used to assess triage accuracy and content analysis in four categories: "Walking wounded," "Respiration," "Perfusion," and "Mental Status." Statistical analysis compared the results.

    RESULT: Google Bard demonstrated a notably higher accuracy of 60%, while ChatGPT achieved an accuracy of 26.67% (p = 0.002). Comparatively, medical students performed at an accuracy rate of 64.3% in a previous study. However, there was no significant difference observed between Google Bard and medical students (p = 0.211). Qualitative content analysis of 'walking-wounded', 'respiration', 'perfusion', and 'mental status' indicated that Google Bard outperformed ChatGPT.

    CONCLUSION: Google Bard was found to be superior to ChatGPT in correctly performing mass casualty incident triage, achieving an accuracy of 60% versus 26.67% for ChatGPT, a statistically significant difference (p = 0.002).
