TY - JOUR
T1 - High level of correspondence across different news domain quality rating sets
AU - Lin, Hause
AU - Lasser, Jana
AU - Lewandowsky, Stephan
AU - Cole, Rocky
AU - Gully, Andrew
AU - Rand, David G.
AU - Pennycook, Gordon
N1 - Funding Information:
H.L. acknowledges funding from the Social Sciences and Humanities Research Council of Canada (756-2022-0220). J.L. was supported by the Horizon 2020 Excellent Science Marie Skłodowska-Curie Actions grant no. 101026507. S.L. received funding from the Humboldt Foundation in Germany and was supported by an ERC Advanced Grant (PRODEMINFO). S.L. also received funding from the Volkswagen Foundation, under grant “Reclaiming individual autonomy and democratic discourse online.” D.G.R. and G.P. acknowledge funding from the John Templeton Foundation, the Social Sciences and Humanities Research Council of Canada, and Google. D.G.R. and G.P. have received research funding from Google and Meta. S.L. has received funding from Google/Jigsaw.
Publisher Copyright:
© The Author(s) 2023.
PY - 2023/9/1
Y1 - 2023/9/1
AB - One widely used approach for quantifying misinformation consumption and sharing is to evaluate the quality of the news domains that a user interacts with. However, different media organizations and fact-checkers have produced different sets of news domain quality ratings, raising questions about the reliability of these ratings. In this study, we compared six sets of expert ratings and found that they generally correlated highly with one another. We then created a comprehensive set of domain ratings for use by the research community (github.com/hauselin/domain-quality-ratings), leveraging an ensemble “wisdom of experts” approach. To do so, we performed imputation together with principal component analysis to generate a set of aggregate ratings. The resulting rating set comprises 11,520 domains (the most extensive coverage to date) and correlates well with other rating sets that have more limited coverage. Together, these results suggest that experts generally agree on the relative quality of news domains, and the aggregate ratings that we generate offer a powerful research tool for evaluating the quality of news consumed or shared and the efficacy of misinformation interventions.
KW - fact-checking
KW - journalism standards
KW - misinformation
KW - news quality
UR - http://www.scopus.com/inward/record.url?scp=85172355490&partnerID=8YFLogxK
U2 - 10.1093/pnasnexus/pgad286
DO - 10.1093/pnasnexus/pgad286
M3 - Article
C2 - 37719749
AN - SCOPUS:85172355490
SN - 2752-6542
VL - 2
JO - PNAS Nexus
JF - PNAS Nexus
IS - 9
M1 - pgad286
ER -