False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice
Khandoozi (Khan), Seyedali (Ali)
Misinformation, Disinformation, Algorithmic Advice, Algorithm Appreciation
Across three conceptual and empirical studies, we respond to urgent calls from all corners of society to address the wicked and universal problem of false messages circulating on the Internet, better known as “fake news”. In the first paper, we problematize the use of the term “fake news”, highlighting issues around its varied meanings in prior academic research, its current meaning in the vernacular, and its adequacy to cover the entire phenomenon of online falsehoods. Building on existing attempts to address this conceptualization problem, we offer our own solution based on the literature on the ontology of digital objects, proposing the concept of “false messages”, of which “false news” is a subset. We also situate this new concept in its broader technical and social context. In the second paper, we shift our attention to mitigating the problem of false news and compare the effects of two algorithmic advisors on individuals’ judgments about news facticity. Many algorithms are being developed to identify false news based on either the content of news articles (content-based algorithms) or social reactions to news articles (social-based algorithms); these, we argue, can act as algorithmic advisors to humans about news facticity. Drawing on the theory of technology dominance (TTD), Judge-Advisor System (JAS) studies, and the computers are social actors (CASA) paradigm, we hypothesize, and find some empirical evidence, that content-based and social-based algorithmic advisors differ in their ability to influence individuals’ judgments about news facticity. In the final paper, we compare two algorithmic advisors that differ in their source of training data: one advisor trained using data from a fact-checker with liberal political attitudes, and the other trained with data from a fact-checker with conservative political attitudes.
Extending the TTD by linking it to similarity-attraction studies, we find different patterns of advice taking from the two algorithmic advisors among US-based Democrats, Republicans, and independents: Democrats utilized advice from the algorithmic advisor with liberal training data, Republicans did not utilize advice from either algorithmic advisor, and independents utilized advice from the liberal algorithmic advisor with more nuance than the Democrats.