Abstract: Misinformation spreads quickly through social media platforms, but how can it be stopped? Can we really ask government or technology companies to arbitrate what is true and what is false? Is algorithmic or human “fact-checking” feasible or even desirable? Recent research suggests that this may not be the way forward: simply prompting users to pay a bit more attention before sharing may be enough to curb the spread of misinformation.
Platforms such as Facebook or X (formerly Twitter) are now where most people get their political information. Social networks have transformed from places where users keep in touch with people they know into places where they consume large amounts of information, often from sources outside their direct circle of “friends.”
This matters both because platforms try to monetize our time by serving us the feeds that interest us most (and keep us on the platform the longest), and because various actors may try to use platforms to influence people's behavior: what we buy and whom we vote for.
The spread of misinformation (or “fake news,” to use a popular shorthand) and the use of platforms to influence political events have been at the center of political debate for several years. Both world-level events, such as the re-election of Donald Trump, and more regional ones, such as the disputed elections in Romania, make it urgent to understand how to tackle misinformation. There is pressure to temper freedom of speech in a media market that is now largely decentralized and atomized.
The European Union, perhaps unsurprisingly, has taken the regulatory route through the implementation of the Digital Services Act. The United States, by contrast, has stronger constitutional protections for free speech, which prevent the government from intervening too directly in such matters.
Understanding how misinformation spreads and how to combat it is crucial, and two recent studies offer valuable insights: Guriev et al. (2023) and Pennycook and Rand (2022). These studies, while employing different methodologies, paint a compelling picture of the problem and illuminate potential solutions.
Pennycook and Rand (2022) approach the problem from a cognitive psychology perspective. Their research emphasizes the crucial role of cognitive biases and limited critical thinking skills in the spread of misinformation. They argue that many individuals lack the necessary skills to evaluate information sources and identify falsehoods effectively. This is not simply a matter of malicious intent, but rather a consequence of cognitive shortcuts and motivated reasoning – our brains' tendency to favor information confirming existing beliefs.
Their work highlights the limitations of simple fact-checking approaches. They found that presenting people with corrective information often backfires, strengthening rather than weakening pre-existing beliefs. This phenomenon, known as the "backfire effect," presents a significant hurdle in misinformation control.
Presenting people with corrective information is also complicated in practice, as it is not clear who should be in charge of deciding when a piece of information is misleading, or what corrective context should be added to counter it. However fashionable “fact-checking” has become, it is easy to argue that even fact-checkers may have their own ideological biases and agendas.
Pennycook and Rand test the effect of giving “accuracy prompts” to people sharing information on social media, in which users are simply nudged to “think twice” before sharing. In their experiments, they find that this substantially decreases the amount of false information subjects share online, without decreasing the likelihood of sharing accurate information; the overall quality of shared information therefore improves. This nudge to think twice is particularly interesting because it requires no fact-checking (algorithmic or human), avoiding both the “backfire effect” and any debate over how to truly “fact check” information. The effect is similar across subjects with different political leanings, so it should be neutral with respect to the overall ideological slant of the online conversation.
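To make concrete what “improving the overall quality of shared information” means, here is a minimal sketch (my own illustration, not the authors' code, with made-up numbers rather than data from either paper) of how sharing quality can be summarized as the gap between the share rates of true and false items, a quantity this literature often calls sharing discernment.

```python
# Illustrative sketch only: quantifying how an accuracy prompt might change the
# quality of shared content. "Sharing discernment" here means the share rate of
# true items minus the share rate of false items.

def share_rate(decisions):
    """Fraction of items shared; `decisions` is a list of 0/1 share choices."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def sharing_discernment(shares_true, shares_false):
    """Share rate of true items minus share rate of false items."""
    return share_rate(shares_true) - share_rate(shares_false)

# Hypothetical per-item share decisions (1 = shared, 0 = not shared),
# invented purely for illustration; these are not results from either paper.
control = {
    "true":  [1, 1, 0, 1, 0, 1, 1, 0],
    "false": [1, 0, 1, 1, 0, 1, 0, 1],
}
prompted = {
    "true":  [1, 1, 0, 1, 0, 1, 1, 0],   # sharing of accurate items unchanged
    "false": [0, 0, 1, 0, 0, 1, 0, 0],   # fewer false items shared after the prompt
}

for label, group in [("control", control), ("accuracy prompt", prompted)]:
    d = sharing_discernment(group["true"], group["false"])
    print(f"{label:>15}: discernment = {d:+.2f}")
```

In this toy example the prompt leaves sharing of accurate items untouched and cuts sharing of false items, so discernment rises; that is the qualitative pattern both papers report, although the magnitudes here are invented.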
Guriev et al. (2023) take a complementary approach, conducting a large-scale experiment during the 2022 US midterm elections and using Twitter as a real-world laboratory. They presented participants with tweets containing either accurate or false information about political issues, and then implemented several interventions designed to curb the spread of misinformation: making the retweet process slightly more cumbersome (an “extra click”), priming subjects with a simple warning message before sharing, offering a fact check, or encouraging participants to evaluate the accuracy and bias of the tweets before sharing. As shown in Figure 1, all interventions reduced the sharing of false news, and the most effective was the simple "priming fake news circulation" message. This translated into a lower rate of retweeting of false news, with no effect on accurate information.

This finding aligns closely with the central argument of Pennycook and Rand (2022) discussed above.
There are several possible reasons why these interventions reduce the spread of misinformation. People may retweet for different motives: to persuade others, to signal their political affiliation, but also to maintain their reputation as a trustworthy source of information. In this sense, the decision to retweet something depends both on the ideological leaning of the news (and of the person!) and on its veracity. Working through these channels, priming people on the risks of spreading misinformation improves their retweeting patterns.
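This reasoning can be summarized in a stylized sharing rule (a sketch of my own, not the formal model in Guriev et al. 2023): a user retweets a story when the partisan or signalling value of the story, plus the reputational value of passing on something likely to be true, outweighs the cost of sharing.

```latex
% Stylized sketch, not the authors' model: user i retweets story s if and only if
\[
  \underbrace{\alpha_i\, a_{is}}_{\text{partisan/signalling value}}
  \;+\;
  \underbrace{\beta_i \,\Pr(\text{true}\mid s)}_{\text{reputational value}}
  \;-\;
  \underbrace{c}_{\text{cost of sharing}}
  \;>\; 0,
\]
% where a_{is} measures how well the story aligns with i's ideology and
% beta_i captures how much i cares about being seen as a reliable source.
```

Read this way, an accuracy prompt raises the salience of the reputational term (a larger effective beta), while an extra click raises the cost c; both discourage sharing mainly for stories whose perceived probability of being true is low, which matches the asymmetric effect on false versus accurate news that both papers find.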
The findings of both papers point towards fairly simple interventions. Most of the political debate on “fake news” and “fact-checking” calls for different actors to intervene directly in the news market, with people across the political spectrum advocating for or against technology companies or governments making the call on what is true or false, either through technology (“algorithmic fact-checking”) or through supposedly trustworthy agencies.
The experimental results of these papers tell us that we do not need to take a stance in this debate: it is far more effective simply to curb the mindless sharing of information, through small hurdles in the sharing process or by priming users on the importance of not spreading misinformation. Of course, it is not clear whether such prompts are in the interest of platforms, as they may end up reducing engagement and therefore profits.
The fight against misinformation is a complex challenge, but the combined insights of Guriev et al. (2023) and Pennycook and Rand (2022) illuminate potential pathways forward. By combining short-term, low-cost interventions with long-term efforts to improve critical thinking and media literacy, we may improve the information ecosystem without any need for human or technological arbitration of what is true or false.
Bibliography:
Guriev, S, E Henry, T Marquis, and E Zhuravskaya (2023), “Curtailing False News, Amplifying Truth,” CEPR Discussion Paper No. 18650.
Pennycook, G and D Rand (2022), “Accuracy prompts are a replicable and generalizable approach for reducing the spread of misinformation,” Nature Communications 13: 2333.
Prof. Emanuele Bracco
Associate Professor of Economics
Dipartimento di Scienze Economiche
Università di Verona, Italy