Imperial Business Partners: Tech Digest


Fact or fiction: A quest for objectivity in an age of fake news and misinformation

By Dr Julio Amador Diaz Lopez, Imperial College Business School

Although the term "fake news" only reached the mainstream during the 2016 electoral campaign in the United States, the phenomenon, and in particular worries about how agents strategically act to influence people's beliefs and perceptions, has resurfaced every time a new technological breakthrough in communications has emerged (Allcott and Gentzkow, 2017; Pennycook and Rand, 2017).

For instance, the introduction of cheap printing presses and advertising business models allowed newspapers to increase their reach dramatically (Allcott and Gentzkow, 2017). Partisans, ideologues, and some ill-intentioned "entrepreneurs" were among the beneficiaries who, by adopting such innovations and strategically promoting sensational stories, managed to increase sales.

The difficulty stems, in part, from humans' limitations in detecting fake information. A meta-analysis of over 200 experiments (Bond and DePaulo, 2006) finds that humans perform only 4% better than chance at detecting lies. Differences in legislation add to these problems.

Countries like the United States severely constrain the government from imposing any limitation on freedom of speech, whereas others (e.g., the United Kingdom and France) allow the government to act in specifically defined cases such as national security (Sanger, 2018).

The five areas of interdisciplinary research

Regardless of these difficulties, a recent survey on deceptive information and rumour detection (Rubin, 2017) suggests interdisciplinary research has focused its attention on five different areas:

  1. Deceptive information - intentionally and knowingly attempting to create false beliefs on the part of the receiver (Buller and Burgoon, 1996; Zhou et al., 2004). It is worth noting that most definitions of what is commonly understood as fake news fall within the set of deceptive information.
  2. Rumours - unverified assertions that start from one or more sources and spread over time (Vosoughi et al., 2018). Rumours may or may not be intended as deceptions as their defining feature is the lack of verifiability at the time of dissemination.
  3. Subjectivity of information - relates to potential human biases in deceptive information.
  4. Truthfulness of information - whether a piece of information is false. Since such a clear-cut distinction may not be achievable, truthfulness is also related to the degree of credibility of information.
  5. Misinformation - commonly used as an umbrella term encompassing all of the above, as it abstracts away from the intention or truth condition of information.

The recent explosion in the use of social networking sites has complicated the task of detecting misinformation. Agents looking to deceive react almost immediately to automatic detection systems, posing a challenge for practitioners and researchers alike. Most research efforts have concentrated on answering the following questions:

  • Is it possible to find characteristic features that differentiate deceptive information from other kinds of information?
  • And, if so, can we use such features to facilitate flagging different degrees of deception?

Within these areas, much of the research aims to develop methods to detect misinformation and deceptive news automatically. Two approaches have proven particularly successful: linguistic and network-based. The former has leveraged deceivers' language leakages to detect lies.

This technique is based on the assumption that liars carefully select their language to avoid being caught (Feng and Hirst, 2013). That careful selection, however, leads to unintended mistakes that automatic detection systems can use to spot a piece of misinformation. Standard linguistic methods for detecting misinformation include statistical analysis of word representations, syntactic analysis, and semantic and discourse analysis (Rubin, 2017).

Additionally, statistical analysis is frequently paired with machine learning classifiers, e.g. Support Vector Machines (SVM) and Naive Bayes (Zhang et al., 2012; Oraby et al., 2017). Machine learning classification has proven particularly successful in detecting deceptive information and in analysing the subjectivity and truthfulness of information, with accuracy ranging between 80% and 90% (Rubin et al., 2015).
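By way of illustration, the sketch below shows how a linguistic classifier of the kind described above might be assembled with the open-source scikit-learn library, pairing TF-IDF word and bigram features with a linear SVM. The handful of labelled examples is invented purely for illustration; a real system would require a large, carefully curated training corpus and far richer features.

```python
# A minimal sketch of a linguistic misinformation classifier:
# TF-IDF word/bigram features paired with a linear SVM, in the spirit
# of the SVM and Naive Bayes approaches cited above.
# The tiny labelled corpus here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING: miracle cure the government doesn't want you to see!",
    "Officials confirmed the figures in a press briefing on Tuesday.",
    "You won't BELIEVE what this celebrity said about the vaccine!!!",
    "The study, published in a peer-reviewed journal, reports a 2% rise.",
]
labels = ["deceptive", "credible", "deceptive", "credible"]

# Word and bigram features capture some of the stylistic "leakages"
# (sensational wording, exaggeration) discussed in the text.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
model.fit(texts, labels)

print(model.predict(["Experts say the miracle cure is a hoax, officials report."]))
```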

However, machine learning classifiers are heavily dependent on past data and, as such, may not generalise well. Furthermore, research has shown that some linguistic patterns are difficult to monitor and that, with new technologies such as adversarial networks, naturally used linguistic patterns are increasingly easy to mimic.

Network-based approaches for automatic misinformation detection

On the other hand, network approaches make use of network properties and meta-data to detect misinformation automatically. As such, these methods have usually been used as complements to linguistic approaches.

A commonly used formula pairs knowledge graphs with analysis of social network behaviour. The former measures how closely the information presented relates to statements about the "known world": the closer a piece of information is to the known state of the world, the likelier it is to be true. Published work using these methods has achieved between 61% and 95% accuracy (Ciampaglia et al., 2015).
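To make the intuition concrete, here is a minimal sketch of graph-based fact checking using the networkx library: a claimed relation between two entities is scored by how close they sit in a graph of known facts. The toy triples and the simple shortest-path score are illustrative assumptions, not the weighted path measure used in the published work.

```python
# A toy illustration of knowledge-graph fact checking: a claimed
# (subject, object) pair is scored by how close the two entities are
# in a graph of known facts. Shorter paths suggest higher plausibility.
# The triples below are invented for illustration.
import networkx as nx

known_facts = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Hawaii", "part_of", "United States"),
    ("Canberra", "capital_of", "Australia"),
]

G = nx.Graph()
for subj, _, obj in known_facts:
    G.add_edge(subj, obj)

def plausibility(subject: str, obj: str) -> float:
    """Crude proximity score: 1 / shortest-path length, 0 if disconnected."""
    try:
        return 1.0 / nx.shortest_path_length(G, subject, obj)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return 0.0

print(plausibility("Barack Obama", "United States"))  # connected -> more plausible
print(plausibility("Barack Obama", "Australia"))      # disconnected -> implausible
```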

The latter looks at meta-data embedded within online social networks to detect behaviours that seek to influence perceptions. For instance, the inclusion of hyperlinks, connections to other accounts, media elements, and patterns of information diffusion are helpful features for detecting misinformation (Vosoughi et al., 2018; Monti et al., 2019). The deceptive nature of fake news, coupled with the pace and scale of social media, suggests network-based methods are better suited to the problem at hand.

The reason is that linguistic-based classifiers might not generalise to fast-changing social media contexts, or from one country to another even where the same language is spoken (e.g. the United States versus Great Britain).
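As a concrete illustration of the network side, the sketch below turns a handful of the meta-data signals mentioned above (hyperlinks, media elements, connections, diffusion speed) into a feature record that could feed a downstream classifier. The field names and the example post are hypothetical; in practice these signals would come from a platform's API.

```python
# A sketch of network/meta-data feature extraction for a social media post.
# The dictionary schema (field names) is hypothetical; real features would
# come from a platform API and feed a downstream classifier.
from dataclasses import dataclass

@dataclass
class PostFeatures:
    num_links: int            # hyperlinks included in the post
    has_media: bool           # images or video attached
    follower_count: int       # connections of the posting account
    account_age_days: int     # newly created accounts are a common red flag
    reshares_first_hour: int  # crude proxy for diffusion speed

def extract_features(post: dict) -> PostFeatures:
    return PostFeatures(
        num_links=len(post.get("urls", [])),
        has_media=bool(post.get("media", [])),
        follower_count=post.get("author", {}).get("followers", 0),
        account_age_days=post.get("author", {}).get("age_days", 0),
        reshares_first_hour=post.get("reshares_first_hour", 0),
    )

example_post = {
    "urls": ["http://example.com/miracle-cure"],
    "media": [],
    "author": {"followers": 12, "age_days": 3},
    "reshares_first_hour": 450,
}
print(extract_features(example_post))
```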

Recent challenges for misinformation detection

The past couple of years have proven challenging for those involved in detecting misinformation. New technologies such as 'deep fakes' leverage knowledge from the systems described above to learn adversarially (i.e., by pitting generated fakes against automatic detection systems) how to trick them. In addition to new technologies, the rise of nationalism has opened the gates to a new world of hyper-partisan misinformation.

Events that draw global attention, such as the COVID-19 pandemic, help misinformation travel faster and wider. For example, in our research, we have identified widespread misinformation campaigns aiming to convince the public that American operatives seeded the new coronavirus.

At the same time, we have also seen coordinated movements in the western world trying to persuade the public that China created and purposely let the virus run amok. This sort of misinformation has the power to damage corporations as well as nations. Closer to home, we have seen vaccine-related misinformation damage the reputation of perfectly good vaccines and manufacturers.

When combined with ideology, this misinformation is dangerous for pharmaceutical companies and for public health outcomes. Clear examples can be found in France, where senior leadership cast the AstraZeneca vaccine as unsafe, increasing vaccine hesitancy in an already hesitant country.

More effective tools in the face of global misinformation activities

Despite all of this, recent years have also seen increased research and industry development of tools that help mitigate these trends. Notable examples include tools that empower fact-checkers worldwide.

For instance, a Spanish startup named Newtral has developed tools that enable fact-checkers to leverage past information from different knowledge graphs to make their fact-checking process more efficient. Closer to Imperial was Twitter's acquisition of FabulaAI, a startup that used novel network-based technologies to identify diffusion patterns and detect potential misinformation.

But perhaps the most critical development has been civil society's realisation of the scale of the problem.

Particularly relevant are the efforts in misinformation literacy to educate people on how to spot questionable pieces of information.

The dynamics between agents looking to spread misinformation and those aiming to stop it, together with rapid technological and political developments, suggest the coming years will be turbulent.

The insurrection at the United States Capitol on January 6 is probably the best example of what can happen when technology-powered misinformation meets political polarisation.

But, in the same way the January 6 insurrection was controlled, it is quite likely this era of turbulence will also recede, opening the path to new, more robust democracies.

Contact the IBP team for more information

References

Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236. 

Bond, C. F., & DePaulo, B. M. (2006). Accuracy of Deception Judgments. Personality and Social Psychology Review, 10(3), 214–234. 

Buller, D. B., & Burgoon, J. K. (1996). Interpersonal Deception Theory. Communication Theory, 6(3), 203–242. 

Ciampaglia, G. L., Shiralkar, P., Rocha, L. M., Bollen, J., Menczer, F., & Flammini, A. (2015). Computational fact checking from knowledge networks.

Feng, V. W., & Hirst, G. (2013). Detecting deceptive opinions with profile compatibility.

Monti, F., Frasca, F., Eynard, D., Mannion, D., & Bronstein, M. M. (2019). Fake News Detection on Social Media using Geometric Deep Learning.

Oraby, S., Reed, L., Compton, R., Riloff, E., Walker, M., & Whittaker, S. (2017). And That's A Fact: Distinguishing Factual and Emotional Argumentation in Online Dialogue.

Pennycook, G., & Rand, D. G. (2017, September 12). Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity.

Rubin, V. L., Chen, Y., & Conroy, N. J. (2015). Deception detection for news: Three types of fakes. Proceedings of the Association for Information Science and Technology, 52(1), 1–4.

Rubin, V. (2017). Deception Detection and Rumor Debunking for Social Media. FIMS Publications.

Sanger, D. E. (2018). The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science (New York, N.Y.), 359(6380), 1146–1151. 

Zhang, H., Fan, Z., Zheng, J., & Liu, Q. (2012). An Improving Deception Detection Method in Computer-Mediated Communication. Journal of Networks, 7(11). 

Zhou, L., Burgoon, J. K., Nunamaker, J. F., & Twitchell, D. (2004). Automating Linguistics-Based Cues for Detecting Deception in Text-Based Asynchronous Computer-Mediated Communications. Group Decision and Negotiation, 13(1), 81–106.