A Novel Crowdsourcing Approach for Improving the Information Ecosystem
Given the explosion of news transmitted to, and shared by, consumers across different media, the veracity of information is of critical importance. However, the capacity of existing fact-checking organizations is limited, so only a small proportion of news articles is ever fact-checked. We address the challenge of scaling up fact-checking operations in the domain of science-related articles by proposing and testing a novel crowdsourcing solution. A key challenge with asking lay consumers to rate the veracity of scientific news articles is that they are likely to be biased by their prior beliefs. Using articles that have been rated for veracity by scientists as a starting point, we overcome this bias by eliciting crowdsourced similarity ratings rather than veracity ratings. We find that asking lay consumers to rate the similarity between scientist-rated and unrated articles provides an unbiased, effective, and efficient way to scale up veracity ratings of scientific articles. Our proposed method (human similarity judgments) predicts an article's scientific veracity more accurately than algorithmic similarity measures (e.g., TF-IDF, Word2Vec, and BERT). It also outperforms earlier approaches to judging veracity, such as relying on the semantic markers of false news identified in prior research. We further compute a "transitivity index" to identify consumers likely to be more accurate at making similarity judgments, and show how veracity predictions can be improved by paying close attention to the consumer segments recruited for the similarity-judgment task. We demonstrate that our method can predict the scientific veracity of articles with over 95% accuracy while keeping both Type I and Type II errors low. Lastly, we find preliminary evidence that the advantage of using similarity judgments stems from consumers being more likely to distance themselves from the arguments being rated.
We also discuss the limitations of this method (e.g., topic-level granularity) and directions for further research (e.g., increasing trust by involving consumers in fact-checking).
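As an illustrative sketch only (not the authors' actual pipeline), the algorithmic baseline mentioned above can be approximated as nearest-neighbor veracity prediction under TF-IDF cosine similarity: an unrated article inherits the veracity label of the most similar scientist-rated article. All article texts and labels below are invented for demonstration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency of each term
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency weighted by inverse document frequency
        vecs.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_veracity(rated, unrated_text):
    """Label an unrated article with the veracity of its most similar rated article.

    `rated` is a list of (text, is_accurate) pairs judged by scientists.
    """
    texts = [t for t, _ in rated] + [unrated_text]
    vecs = tfidf_vectors([t.lower().split() for t in texts])
    target = vecs[-1]
    best = max(range(len(rated)), key=lambda i: cosine(vecs[i], target))
    return rated[best][1]

# Hypothetical scientist-rated corpus (True = scientifically accurate).
rated = [
    ("vaccines are safe and effective say scientists", True),
    ("miracle cure doctors hate this one trick", False),
]
print(predict_veracity(rated, "scientists confirm vaccines safe"))  # → True
```

The paper's finding is that human similarity judgments beat this kind of purely lexical matching, which cannot tell whether two articles share an argumentative stance rather than just vocabulary.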
If you wish to attend, please contact email@example.com