METHODS: A systematic search of PubMed, PsycInfo, and Embase was conducted. Studies published between 2007 and 2022 were included if they reported treatment rates among college students with mental health problems, stratified by sex, gender, race-ethnicity, sexual orientation, student type, student year, or student status. Random-effects models were used to calculate pooled prevalence ratios (PRs) of having a perceived need for treatment and of receiving treatment for each sociodemographic subgroup.
RESULTS: Twenty-one studies qualified for inclusion. Among students experiencing mental health problems, consistent and significant sociodemographic differences were identified in perceived need for treatment and treatment receipt. Students from racial-ethnic minority groups (in particular, Asian students [PR=0.49]) and international students (PR=0.63) reported lower rates of treatment receipt than White students and domestic students, respectively. Students identifying as female (sex) or as women (gender) (combined PR=1.33) reported higher rates of treatment receipt than students identifying as male or as men. Differences in perceived need appeared to contribute to some disparities; in particular, students identifying as male or as men reported considerably lower rates of perceived need than students identifying as female or as women.
CONCLUSIONS: Findings highlight the need for policy makers to address barriers throughout the treatment-seeking pathway and to tailor efforts to student subgroups to reduce treatment disparities.
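The pooled prevalence ratios above come from random-effects models. A minimal sketch of the standard DerSimonian-Laird approach (pooling log prevalence ratios, then back-transforming) is below; the per-study values are hypothetical illustrations, not data from the review.

```python
import math

def pool_log_pr(log_prs, variances):
    """DerSimonian-Laird random-effects pooling of log prevalence ratios."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_prs)) / sw
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_prs))
    df = len(log_prs) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # re-weight each study by 1 / (within-study variance + tau^2)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_prs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Hypothetical per-study log PRs (treatment receipt, women vs. men) and variances
log_prs = [math.log(1.4), math.log(1.2), math.log(1.5)]
variances = [0.02, 0.03, 0.05]
pooled, se = pool_log_pr(log_prs, variances)
pr = math.exp(pooled)
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
```

Pooling on the log scale and exponentiating at the end keeps the confidence interval asymmetric around the ratio, as it should be.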
METHODS: We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
RESULTS: 16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (-0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
CONCLUSIONS: PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
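The sensitivity/specificity comparison above reduces to tabulating each measure against the reference diagnosis at the same cutoff. A self-contained sketch with invented scores (the PHQ-8 total is the PHQ-9 total minus item 9, so it can only fall at or below it):

```python
def sens_spec(scores, is_case, cutoff):
    """Sensitivity and specificity of score >= cutoff vs. a reference diagnosis."""
    tp = sum(s >= cutoff and c for s, c in zip(scores, is_case))
    fn = sum(s < cutoff and c for s, c in zip(scores, is_case))
    tn = sum(s < cutoff and not c for s, c in zip(scores, is_case))
    fp = sum(s >= cutoff and not c for s, c in zip(scores, is_case))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: dropping item 9 pushes one case just below the cutoff
phq9 = [12, 9, 15, 4, 10, 8, 22, 3]
item9 = [1, 0, 2, 0, 1, 0, 3, 0]
phq8 = [t - i for t, i in zip(phq9, item9)]
cases = [True, False, True, False, True, False, True, False]

print(sens_spec(phq9, cases, 10))  # → (1.0, 1.0)
print(sens_spec(phq8, cases, 10))  # → (0.75, 1.0)
```

This illustrates the direction of the finding: removing item 9 can only lower totals, so at a fixed cutoff the PHQ-8 tends to be slightly less sensitive and at least as specific.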
METHOD: Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
RESULTS: A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI than for the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15-3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98-10.00), about equally likely for moderate-level symptoms (PHQ-9 scores 7-15) (OR = 0.96; 95% CI = 0.56-1.66), and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26-0.97).
CONCLUSIONS: The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
DECLARATION OF INTEREST: Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
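The interview comparisons above were estimated with binomial generalised linear mixed models adjusting for symptom scores and participant characteristics. As a much-simplified, unadjusted stand-in, the basic quantity is an odds ratio from a 2×2 table with a Wald confidence interval; the counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted OR with 95% Wald CI for depression classification:
    exposed group (e.g. MINI) a cases / b non-cases,
    reference group (e.g. CIDI) c cases / d non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 300/1700 classified depressed/not by one interview,
# 150/1850 by the other
or_, lo, hi = odds_ratio(300, 1700, 150, 1850)
```

The mixed models in the paper additionally account for clustering by study and condition on PHQ-9 scores, which this sketch deliberately omits.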
OBJECTIVE: To use an individual participant data meta-analysis to evaluate the accuracy of two PHQ-9 diagnostic algorithms for detecting major depression and compare accuracy between the algorithms and the standard PHQ-9 cutoff score of ≥10.
METHODS: We searched Medline, Medline In-Process and Other Non-Indexed Citations, PsycINFO, and Web of Science (January 1, 2000, to February 7, 2015). Eligible studies classified current major depression status using a validated diagnostic interview.
RESULTS: Data were included for 54 of 72 identified eligible studies (16,688 participants, 2,091 major depression cases). Among studies that used a semi-structured interview, pooled sensitivity and specificity (95% confidence interval) were 0.57 (0.49, 0.64) and 0.95 (0.94, 0.97) for the original algorithm and 0.61 (0.54, 0.68) and 0.95 (0.93, 0.96) for a modified algorithm. Algorithm sensitivity was 0.22-0.24 lower compared to fully structured interviews and 0.06-0.07 lower compared to the Mini International Neuropsychiatric Interview. Specificity was similar across reference standards. For a PHQ-9 cutoff of ≥10 compared to semi-structured interviews, sensitivity and specificity (95% confidence interval) were 0.88 (0.82-0.92) and 0.86 (0.82-0.88).
CONCLUSIONS: The cutoff score approach appears to be a better option than a PHQ-9 algorithm for detecting major depression.
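For readers unfamiliar with the algorithm approach being compared against the cutoff: the original PHQ-9 diagnostic algorithm is commonly described as requiring at least five of the nine symptoms endorsed, where items 1-8 count at a score ≥2 ("more than half the days"), item 9 counts at ≥1, and at least one endorsed symptom must be item 1 (anhedonia) or item 2 (depressed mood). The sketch below encodes that common description; exact thresholds vary across studies, so treat the details as assumptions.

```python
def phq9_algorithm(items):
    """Original PHQ-9 diagnostic algorithm for major depression (as commonly
    described; thresholds here are assumptions, not the paper's exact rules).
    `items` is a list of nine item scores, each 0-3."""
    # Items 1-8 count as endorsed at >=2; item 9 (self-harm) at >=1
    endorsed = [s >= 2 for s in items[:8]] + [items[8] >= 1]
    # At least 5 symptoms, including anhedonia (item 1) or depressed mood (item 2)
    return sum(endorsed) >= 5 and (endorsed[0] or endorsed[1])

phq9_algorithm([2, 2, 2, 2, 2, 0, 0, 0, 0])  # five symptoms incl. core: positive
phq9_algorithm([0, 0, 2, 2, 2, 2, 2, 0, 0])  # five symptoms, no core: negative
```

Unlike a total-score cutoff, the algorithm ignores how much sub-threshold items contribute, which is one reason its sensitivity can be much lower.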
METHODS: Data accrued for an IPDMA on HADS-D diagnostic accuracy were analysed. We fit binomial generalized linear mixed models to compare odds of major depression classification for the Structured Clinical Interview for DSM (SCID), Composite International Diagnostic Interview (CIDI), and Mini International Neuropsychiatric Interview (MINI), controlling for HADS-D scores and participant characteristics with and without an interaction term between interview and HADS-D scores.
RESULTS: There were 15,856 participants (1942 [12%] with major depression) from 73 studies, including 15,335 (97%) non-psychiatric medical patients, 164 (1%) partners of medical patients, and 357 (2%) healthy adults. The MINI (27 studies, 7345 participants, 1066 major depression cases) classified participants as having major depression more often than the CIDI (10 studies, 3023 participants, 269 cases) (adjusted odds ratio [aOR] = 1.70 (0.84, 3.43)) and the semi-structured SCID (36 studies, 5488 participants, 607 cases) (aOR = 1.52 (1.01, 2.30)). The odds of major depression classification increased less with increasing HADS-D scores for the CIDI than for the SCID (interaction aOR = 0.92 (0.88, 0.96)).
CONCLUSION: Compared to the SCID, the MINI may diagnose more participants as having major depression, and the CIDI may be less responsive to symptom severity.
OBJECTIVE: To evaluate the degree to which using data-driven methods to simultaneously select an optimal Patient Health Questionnaire-9 (PHQ-9) cutoff score and estimate accuracy yields (1) optimal cutoff scores that differ from the population-level optimal cutoff score and (2) biased accuracy estimates.
DESIGN, SETTING, AND PARTICIPANTS: This study used cross-sectional data from an existing individual participant data meta-analysis (IPDMA) database on PHQ-9 screening accuracy to represent a hypothetical population. Studies in the IPDMA database compared participant PHQ-9 scores with a major depression classification. From the IPDMA population, 1000 studies of 100, 200, 500, and 1000 participants each were resampled.
MAIN OUTCOMES AND MEASURES: For the full IPDMA population and each simulated study, an optimal cutoff score was selected by maximizing the Youden index. Accuracy estimates for optimal cutoff scores in simulated studies were compared with accuracy in the full population.
RESULTS: The IPDMA database included 100 primary studies with 44 503 participants (4541 [10%] cases of major depression). The population-level optimal cutoff score was 8 or higher. Optimal cutoff scores in simulated studies ranged from 2 or higher to 21 or higher in samples of 100 participants and 5 or higher to 11 or higher in samples of 1000 participants. The percentage of simulated studies that identified the true optimal cutoff score of 8 or higher was 17% for samples of 100 participants and 33% for samples of 1000 participants. Compared with estimates for a cutoff score of 8 or higher in the population, sensitivity was overestimated by 6.4 (95% CI, 5.7-7.1) percentage points in samples of 100 participants, 4.9 (95% CI, 4.3-5.5) percentage points in samples of 200 participants, 2.2 (95% CI, 1.8-2.6) percentage points in samples of 500 participants, and 1.8 (95% CI, 1.5-2.1) percentage points in samples of 1000 participants. Specificity was within 1 percentage point across sample sizes.
CONCLUSIONS AND RELEVANCE: This study of cross-sectional data found that optimal cutoff scores and accuracy estimates differed substantially from population values when data-driven methods were used to simultaneously identify an optimal cutoff score and estimate accuracy. Users of diagnostic accuracy evidence should evaluate studies of accuracy with caution and ensure that cutoff score recommendations are based on adequately powered research or well-conducted meta-analyses.
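The mechanism behind the bias described above can be reproduced in a few lines: select the cutoff maximizing Youden's J in a large "population", then re-select it in small resampled "studies" and watch it wander. The population below is entirely synthetic (normal score distributions, ~10% prevalence are assumptions), not the IPDMA data.

```python
import random

random.seed(1)

def clamp(x):
    """Keep simulated scores on a 0-27 PHQ-9-like scale."""
    return min(27, max(0, round(x)))

# Hypothetical population: 1000 cases scoring higher on average than 9000 non-cases
pop = ([(clamp(random.gauss(14, 4)), True) for _ in range(1000)] +
       [(clamp(random.gauss(5, 4)), False) for _ in range(9000)])

def youden_cutoff(data):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    n_case = sum(1 for _, d in data if d)
    n_non = len(data) - n_case
    best_c, best_j = 0, -1.0
    for c in range(28):
        tp = sum(1 for s, d in data if d and s >= c)
        tn = sum(1 for s, d in data if not d and s < c)
        j = tp / n_case + tn / n_non - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c

pop_cutoff = youden_cutoff(pop)

# Resample small "studies": each picks its own data-driven optimal cutoff,
# which frequently misses the population-level one
hits = total = 0
for _ in range(200):
    study = random.sample(pop, 100)
    if not any(d for _, d in study) or all(d for _, d in study):
        continue  # guard against degenerate samples with no cases or no controls
    total += 1
    hits += youden_cutoff(study) == pop_cutoff
```

Because the same small sample both picks the cutoff and estimates accuracy at it, the chosen cutoff chases sampling noise, which is exactly why the simulated studies in the abstract overestimate sensitivity at small N.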