Displaying all 3 publications

  1. Harel D, Wu Y, Levis B, Fan S, Sun Y, Xu M, et al.
    J Affect Disord, 2024 Sep 15;361:674-683.
    PMID: 38908554 DOI: 10.1016/j.jad.2024.06.033
    Administration mode of patient-reported outcome measures (PROMs) may influence responses. We assessed whether Patient Health Questionnaire-9 (PHQ-9), Edinburgh Postnatal Depression Scale (EPDS), and Hospital Anxiety and Depression Scale - Depression subscale (HADS-D) item responses and scores were associated with administration mode. We compared (1) self-administration versus interview administration; within self-administration, (2) completion in a research or medical setting versus in private and (3) pen-and-paper versus electronic administration; and, within interview administration, (4) in-person versus phone interviews. We analysed individual participant data meta-analysis datasets with item-level data for the PHQ-9 (N = 34,529), EPDS (N = 16,813), and HADS-D (N = 16,768). We used multiple indicator multiple cause (MIMIC) models to assess differential item functioning (DIF) by administration mode. Because of the large samples, we found statistically significant DIF for most items on all measures, but the influence on total scores was negligible. In 10 comparisons conducted across the PHQ-9, EPDS, and HADS-D, Pearson's correlations and intraclass correlation coefficients between latent depression symptom scores from models that did or did not account for DIF were between 0.995 and 1.000. Total PHQ-9, EPDS, and HADS-D scores did not differ materially across administration modes. Researchers and clinicians who evaluate depression symptoms with these questionnaires can select administration methods based on patient preferences, feasibility, or cost.
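
    To make the DIF idea above concrete, the following is a minimal Python sketch on synthetic data. It is not the authors' analysis: the paper fits multiple indicator multiple cause (MIMIC) structural equation models to the IPDMA item data, whereas this sketch substitutes a simpler logistic-regression DIF check (item response predicted by a severity proxy plus administration mode); all variable names and the simulated data are hypothetical.

```python
# Illustrative only: the paper assesses DIF with MIMIC structural equation
# models; this sketch uses a simpler logistic-regression DIF check on
# synthetic data to show the basic idea of testing whether administration
# mode predicts an item response beyond overall symptom severity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: a latent severity proxy, administration mode
# (0 = self-administered, 1 = interview-administered), and one binarized
# item response driven only by severity (i.e., no DIF is simulated).
severity = rng.normal(size=n)
mode = rng.integers(0, 2, size=n)
p_item = 1.0 / (1.0 + np.exp(-(1.5 * severity - 0.5)))
item = (rng.random(n) < p_item).astype(int)

# DIF check: does mode still predict the item after conditioning on severity?
# A substantial, significant mode coefficient would suggest DIF.
X = sm.add_constant(np.column_stack([severity, mode]))
fit = sm.Logit(item, X).fit(disp=False)
print(fit.params)   # [intercept, severity, mode]; mode should be near zero
print(fit.pvalues)  # mode p-value should be non-significant here
```
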
  2. Levis B, Bhandari PM, Neupane D, Fan S, Sun Y, He C, et al.
    JAMA Netw Open, 2024 Nov 04;7(11):e2429630.
    PMID: 39576645 DOI: 10.1001/jamanetworkopen.2024.29630
    IMPORTANCE: Test accuracy studies often use small datasets to simultaneously select an optimal cutoff score that maximizes test accuracy and generate accuracy estimates.

    OBJECTIVE: To evaluate the degree to which using data-driven methods to simultaneously select an optimal Patient Health Questionnaire-9 (PHQ-9) cutoff score and estimate accuracy yields (1) optimal cutoff scores that differ from the population-level optimal cutoff score and (2) biased accuracy estimates.

    DESIGN, SETTING, AND PARTICIPANTS: This study used cross-sectional data from an existing individual participant data meta-analysis (IPDMA) database on PHQ-9 screening accuracy to represent a hypothetical population. Studies in the IPDMA database compared participant PHQ-9 scores with a major depression classification. From the IPDMA population, 1000 studies of 100, 200, 500, and 1000 participants each were resampled.

    MAIN OUTCOMES AND MEASURES: For the full IPDMA population and each simulated study, an optimal cutoff score was selected by maximizing the Youden index. Accuracy estimates for optimal cutoff scores in simulated studies were compared with accuracy in the full population.

    RESULTS: The IPDMA database included 100 primary studies with 44 503 participants (4541 [10%] cases of major depression). The population-level optimal cutoff score was 8 or higher. Optimal cutoff scores in simulated studies ranged from 2 or higher to 21 or higher in samples of 100 participants and 5 or higher to 11 or higher in samples of 1000 participants. The percentage of simulated studies that identified the true optimal cutoff score of 8 or higher was 17% for samples of 100 participants and 33% for samples of 1000 participants. Compared with estimates for a cutoff score of 8 or higher in the population, sensitivity was overestimated by 6.4 (95% CI, 5.7-7.1) percentage points in samples of 100 participants, 4.9 (95% CI, 4.3-5.5) percentage points in samples of 200 participants, 2.2 (95% CI, 1.8-2.6) percentage points in samples of 500 participants, and 1.8 (95% CI, 1.5-2.1) percentage points in samples of 1000 participants. Specificity was within 1 percentage point across sample sizes.

    CONCLUSIONS AND RELEVANCE: This study of cross-sectional data found that optimal cutoff scores and accuracy estimates differed substantially from population values when data-driven methods were used to simultaneously identify an optimal cutoff score and estimate accuracy. Users of diagnostic accuracy evidence should evaluate studies of accuracy with caution and ensure that cutoff score recommendations are based on adequately powered research or well-conducted meta-analyses.
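
    The optimism mechanism described above can be reproduced in miniature. The following Python sketch uses a synthetic population (not the IPDMA database), assuming a 0-27 score range, 10% prevalence, and arbitrary score distributions: it resamples small studies, selects the cutoff that maximizes the Youden index within each sample, and reports how much in-sample sensitivity at that data-driven cutoff exceeds sensitivity at the population-level optimal cutoff. All numbers and distributions are assumptions chosen for illustration.

```python
# Minimal sketch of data-driven cutoff optimism on synthetic data (not the
# IPDMA data): resample small studies, pick the cutoff maximizing the Youden
# index in each, and compare in-sample sensitivity with the population value.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 0-27 scores, 10% prevalence, cases score higher.
N = 200_000
case = rng.random(N) < 0.10
scores = np.clip(np.where(case,
                          rng.normal(12, 5, N),
                          rng.normal(5, 4, N)).round(), 0, 27)

cutoffs = np.arange(0, 28)

def sens_spec(score, is_case, cut):
    """Sensitivity and specificity of 'score >= cut' as a positive screen."""
    sens = np.mean(score[is_case] >= cut)
    spec = np.mean(score[~is_case] < cut)
    return sens, spec

# Population-level optimal cutoff by the Youden index (sens + spec - 1).
youden_pop = [sum(sens_spec(scores, case, c)) - 1 for c in cutoffs]
cut_pop = cutoffs[int(np.argmax(youden_pop))]
sens_pop = sens_spec(scores, case, cut_pop)[0]

# Resample small studies; in each, select a data-driven "optimal" cutoff and
# record how far its in-sample sensitivity exceeds the population sensitivity.
n_study, n_sims, optimism = 100, 1000, []
for _ in range(n_sims):
    idx = rng.choice(N, size=n_study, replace=False)
    s, d = scores[idx], case[idx]
    if d.sum() == 0:          # skip the rare sample with no cases
        continue
    youden = [sum(sens_spec(s, d, c)) - 1 for c in cutoffs]
    c_hat = cutoffs[int(np.argmax(youden))]
    optimism.append(sens_spec(s, d, c_hat)[0] - sens_pop)

print(f"population-optimal cutoff: >= {cut_pop}")
print(f"mean sensitivity optimism at n = {n_study}: {np.mean(optimism):.3f}")
```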

  3. Gopalakrishna G, Langendam M, Scholten R, Bossuyt P, Leeflang M, Noel-Storr A, et al.
    Diagn Progn Res, 2017;1:11.
    PMID: 31095132 DOI: 10.1186/s41512-017-0011-4
    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]