The causal loop between information disorder and trust on the Internet

Tanja Pavleska, Laboratory for Open Systems and Networks, Jozef Stefan Institute, Ljubljana

This article explains the link between information disorder (represented by misinformation, disinformation and malinformation) and trust on the Internet, from both a socio-economic and a technological perspective. The topic is analysed through a type of content that has become a popular online embodiment of the problem: the fake news phenomenon and the fact-checking initiatives trying to combat it. The article introduces a conceptual framework for reasoning about information disorder and establishes its connection to trust. Finally, it presents the key findings from an empirical study of the performance of European fact-checking organizations and extracts important stakeholder recommendations.

INTRODUCTION

The advent of information and communication technologies opened up a myriad of opportunities for people to create and distribute content through multiple services and platforms. However, not all actors take advantage of the bright side of the Internet. In fact, they very often create and spread (purposefully or not) content of dubious veracity or unverified origin. This type of content is what has popularly come to be classified as “fake news”. Fake news is often defined simply as false content spread for political purposes. From a broader perspective, however, fake news may refer to rumours, gossip or, generally, information that is dubious or completely misleading. The most controversial property of fake news is undoubtedly its potential to influence how society as a whole, or groups within it, behave and perceive reality. This not only impacts the quality of content on the web, but also undermines users’ trust in the platforms, in the applications and in the other users creating and sharing content. As reported in the Reuters Institute Digital News Report [1], only a quarter (24 %) of the respondents think “social media do a good job in separating facts from fiction, compared to 40 % for the news media.”

These developments in the online world do not, however, imply that traditional media are immune to fake news reporting (Edelman Trust Barometer 2018). Media presentations of reality and the work of journalists in particular have been widely questioned, as distrust in the media as a factor for social progress is on the rise. It is thus of little surprise that we have witnessed the emergence of dozens of fact-checking organizations in Europe over the last several years [2, 3, 4].

Although many articles and studies report on some aspects of these activities, tools and organizations and their work, there is no study, let alone a holistic one, that either determines factors for measuring performance or inspects the influence of those factors on any aspect of the performance of fact-checking efforts. Yet the issue can be approached from a multitude of disciplines:

  • Economically, one can speak about the efficacy of the efforts, their social impact, return on investment, value for money, effect on consumer behaviour, risk assessments, or their contribution to the Internet and media development in general.
  • Politically, one may investigate questions like: Which entities deserve public/civil support, and how can it be provided in the most transparent and sustainable way? What are the practical implications of their functioning? Or more specific questions, such as: What is the correlation between information disorder and the political developments in a country? How can regulation impact and be impacted by these efforts? How are fundamental rights affected by the success or failure of these initiatives?
  • Psychologically and across disciplines, investigating these issues may provide deeper and novel insights into human bias phenomena, the role of social behaviour and groupthink (echo chambers), the formation of social networks in the proliferation of a certain piece of information, the emergence and undermining of trust, etc.

Research has, nevertheless, provided arguments for social media users’ negative perceptions of the general usefulness and trustworthiness of these organizations [3], mainly stemming from transparency issues.

THEORETICAL BACKGROUND

Developing and arguing over a case concerned with fake news, hoaxes, fact-checking and clickbait (monetization and traffic attraction) is often encumbered by the absence of a conceptual common ground on the underlying concepts. Some suggest novel terms, such as attention hacking [5]. Others prefer more general terms, such as distribution of harms, as coined by Rubin et al. in [6]. According to [5], fake news “generally refers to a wide range of disinformation and misinformation circulating online and in the media.” In media markets’ theories, fake news is defined as “distorted signals uncorrelated with the truth” that emerge in the market because it is “cheaper to provide than precise signals” [7]. From a political economy perspective, fake news has a long history that is bound to the commodification of journalism in a market economy [8]. Some researchers, like Wardle and Derakhshan, oppose the use of the term “fake news” per se [9]. In their view, it is a conceptually inadequate and politically abused term. In the same vein, Marwick et al. call for a larger focus on attention and frame hacking, providing a perspective that is more oriented towards sensitivity to data infrastructure manipulation rather than vague discussions on veracity, truth and objectivity [5]. Therefore, Wardle and Derakhshan introduce a new conceptual framework, defining what they prefer to call the key terminology of information disorder: misinformation, disinformation and malinformation, and distinguishing between information that is false and information that is designed to harm [9]. In our study, we adopt the term information disorder and the following definitions:

  • Definition 1: Misinformation occurs when false information is shared, but no harm is meant.
  • Definition 2: Disinformation is when false information is knowingly shared to cause harm.
  • Definition 3: Malinformation is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.

Clearly, fake news can be a detriment to the social momentum of the Internet. Moreover, it can bring as much harm to the public as any other type of harmful content. Despite the inconsistencies among definitions, there seems to be a consensus that the current communication environment within and between many countries worldwide is much more politically and socially challenged than in past periods of grey and black propaganda, conspiracy theories and fabricated content. In a society where the virtual and the real world cannot be divided by a clear line, these developments directly affect the social fabric of democracy worldwide.

FAKE NEWS AND FACT-CHECKING

Considering the variety of stakeholders concerned by the problem, several initiatives and organizations have been established with the common objective of raising awareness and addressing challenges related to trust and truth in the digital age: the NGO First Draft (in 2016) and its global Partner Network of journalism (e.g. BBC, Reuters), human rights (e.g. Amnesty International) and technology (e.g. YouTube) organizations, to name a few. To join these efforts at a European level, in November 2017 the European Commission (EC) announced its next step in the fight against fake news: setting up a High-Level Expert Group and launching a public consultation. Some EU member states had already taken measures to combat information disorder. For example, the Czech Republic set up a specialist “anti fake news” police unit, the Centre Against Terrorism and Hybrid Threats, which has been operating since 2017. Both the Italian and the Slovak police announced a fight against fake news in January 2018. Simultaneously, Sweden engaged in plans to create a new public authority tasked with countering disinformation and boosting the population’s resilience in the face of possible influence operations, a “psychological defence” (psykologiskt försvar) authority. Similarly, in January 2018 the United Kingdom revealed plans to set up a dedicated unit to counter hoax news stories online and stop social media campaigns by foreign adversaries. These initiatives currently have no special institutional or legal background. A debate could be opened on whether an effort to make them institutional would win public approval and what repercussions it would have on human rights, national security and public safety.

There is no single definition of what the objectives of fact-checking organizations are, or even a single description summarizing their basic features. A variety of approaches exist among scientists to determine and describe these organizations. A recent study divides the universe of fact-checking services into three general categories based on their areas of concern: 1) political and public statements in general; 2) online rumours and hoaxes; and 3) specific topics, controversies, particular conflicts or narrowly scoped issues and events [3]. The most recent data counted 137 active fact-checking projects around the world, up from 114 in early 2017. A third of them are located in the USA [2]. In Europe alone, 34 permanent sources of political fact-checking have been identified as active in 20 different European countries, from Ireland to Turkey [4]. These organizations are categorized in terms of their mission and their methods. By this categorization, Graves and Cherubini found that fact-checking outlets occupy a spectrum between reporters, reformers, and a third overlapping category of organizations cultivating a role of independent experts [4].

Fact-checkers around the globe have also formed an entire professional network. The International Fact-Checking Network (IFCN) is a unit of the Poynter Institute dedicated to bringing together fact-checkers worldwide. The IFCN was launched in September 2015 to support fact-checking initiatives by promoting best practices and exchanges among organizations in this field. The association also adopted a Code of principles in 2016. The principles represent professional commitments to non-partisanship and fairness, transparency of sources, transparency of methodology, and open and honest corrections. These comprise the principles and values on which the activities of fact-checking organizations are premised. However, although these organizations are similar to journalistic and other associations (such as non-governmental organizations), they have not adopted criteria for the self-assessment of their performance. Moreover, only some of the European fact-checkers have joined this global network.

Analysing these organizations in more depth, some scholars explore the methodologies they use. Rubin et al. provide a map of the current landscape of veracity assessment methods, their major classes and goals [6]. Two major categories of methods exist: 1) linguistic approaches, in which the content of deceptive messages is extracted and analysed to associate language patterns with deception; and 2) network approaches, in which network information, such as message metadata or structured knowledge network queries, is harnessed to provide aggregated deception measures. Interestingly, most of the insights of deception research originate from disciplines without detection automation in mind.
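As a toy illustration of the first category, a linguistic approach can count simple surface cues often associated with deceptive content and aggregate them into a crude score. The cue lexicons, feature names and the score below are invented for illustration only; real systems learn such patterns from annotated corpora rather than from hand-written lists.

```python
# Hypothetical cue lexicons, invented for illustration.
HEDGES = {"reportedly", "allegedly", "sources say", "some say"}
SENSATIONAL = {"shocking", "unbelievable", "you won't believe", "miracle"}

def linguistic_cues(text: str) -> dict:
    """Count simple deception-associated language cues in a headline.

    A minimal sketch of the 'linguistic approach' described above:
    extract surface patterns and aggregate them into a crude
    suspicion score. Cue lists and the scoring are assumptions.
    """
    lower = text.lower()
    features = {
        "hedges": sum(lower.count(h) for h in HEDGES),
        "sensational": sum(lower.count(s) for s in SENSATIONAL),
        "exclamations": text.count("!"),
        "all_caps_words": sum(w.isupper() and len(w) > 2 for w in text.split()),
    }
    features["suspicion_score"] = sum(features.values())
    return features

print(linguistic_cues("SHOCKING: sources say a miracle cure was found!!"))
```

A real pipeline would replace the hand-written lexicons with features learned by a classifier, but the shape of the computation, extract language patterns and aggregate them into a deception measure, stays the same.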

METHODOLOGY

Despite their diversity, the functional characteristics of fact-checking organizations are denoted by their names. Experts have (rightly) observed that, while the spreading of mis-, dis- and malinformation is mainly dominated by very active users, fact-checking is still more of a grass-roots activity [10]. Furthermore, one serious drawback of fact-checking and debunking activities is that human observers perform poorly in the detection of fake news, with machines even slightly outperforming humans on certain tasks [11, 12]. Mathematical modelling of information diffusion processes has shown that there is a threshold value for the fact-checking probability that guarantees the complete removal of a hoax from the network; this threshold does not depend on the spreading rate, but only on the gullibility and forgetting probabilities [13]. This raises a series of fundamental questions: How efficient are the tools and platforms aimed at combating information disorder? Which factors affect their performance, and how can that performance be evaluated in the first place?
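The threshold behaviour reported in [13] can be illustrated with a toy agent-based simulation. The sketch below is not the paper's model or notation; the graph construction, parameter names and values are all illustrative assumptions, chosen only to show how sweeping the verifying probability changes the fate of a hoax.

```python
import random

def simulate_hoax(n=1000, neighbors=8, p_spread=0.3, gullibility=0.7,
                  p_verify=0.2, p_forget=0.1, steps=100, seed=42):
    """Toy hoax-diffusion simulation, loosely inspired by [13].

    States: 'S' susceptible, 'B' believer, 'F' fact-checker.
    Believers expose susceptible neighbours, who believe with
    probability `gullibility`; each believer may verify (become a
    fact-checker) with probability `p_verify` or forget the hoax with
    probability `p_forget`. All names and values are assumptions.
    """
    rng = random.Random(seed)
    # Crude random graph: each node gets `neighbors` random out-links.
    adj = [[rng.randrange(n) for _ in range(neighbors)] for _ in range(n)]
    state = ['S'] * n
    for i in rng.sample(range(n), 10):   # seed the hoax with 10 believers
        state[i] = 'B'
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] != 'B':
                continue
            if rng.random() < p_verify:      # believer checks the facts
                new[i] = 'F'
            elif rng.random() < p_forget:    # or simply forgets the hoax
                new[i] = 'S'
            else:                            # or keeps spreading it
                for j in adj[i]:
                    if state[j] == 'S' and rng.random() < p_spread:
                        new[j] = 'B' if rng.random() < gullibility else 'F'
        state = new
    return state.count('B')   # believers remaining after `steps` rounds

# Sweeping p_verify illustrates the threshold effect: above a critical
# verifying probability the hoax dies out regardless of the spreading rate.
for pv in (0.01, 0.1, 0.3):
    print(pv, simulate_hoax(p_verify=pv))
```

The qualitative behaviour, not the specific numbers, is the point: below the threshold the believer population persists, above it the hoax is eventually removed.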

To answer these questions, we define a methodology of work that includes: systematizing the components of a fact-checking system into a taxonomy, defining the factors that influence the performance of fact-checkers, identifying indicators to evaluate that performance, and empirically validating all of these through a concrete evaluation of European fact-checking organizations. Here, we only briefly describe these methodological steps, devoting special attention to the results of the empirical study in the next section.

By drawing analogies to computational trust systems and supporting them with relevant proofs, we determined the following three main components that any future fact-checking system needs to integrate: information gathering, decision-making and response. Then, based on an extensive review of the literature and of case studies in a variety of contexts, we identified the following performance indicators for the operational and functional characteristics of fact-checkers: internal coordination, external coordination, tracking impact, tracking progress, clarity of objectives, accounting for transparency, self-assessment procedures and incentives policy. The analysis of the identified indicators is either implicitly or explicitly embedded in the analysis of the effectiveness and efficiency indicators, whose maximization is the most desirable performance aspect. These are also the major aspects integrated into a dedicated Questionnaire for the evaluation of fact-checkers’ performance. The Questionnaire was distributed to all EU fact-checking organizations; the results and their analysis are part of the empirical study presented in the following section.
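As a minimal sketch of how the three components could fit together in software, the following hypothetical pipeline wires placeholder implementations of information gathering, decision-making and response. None of the class names, hooks or logic come from the study itself; they only make the taxonomy concrete.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)
    verdict: str = ""

class FactCheckPipeline:
    """Wires the three components named above; every hook is a placeholder."""

    def __init__(self, gather: Callable, decide: Callable, respond: Callable):
        self.gather, self.decide, self.respond = gather, decide, respond

    def run(self, claim: Claim) -> Claim:
        claim.evidence = self.gather(claim)   # 1. Information gathering
        claim.verdict = self.decide(claim)    # 2. Decision-making
        self.respond(claim)                   # 3. Response (publish, flag, ...)
        return claim

# Toy stand-ins for each stage:
pipeline = FactCheckPipeline(
    gather=lambda c: ["source A", "source B"],
    decide=lambda c: "checked" if len(c.evidence) >= 3 else "unverified",
    respond=lambda c: print(f"{c.text!r} -> {c.verdict}"),
)
result = pipeline.run(Claim("The moon is made of cheese"))
```

The value of the decomposition is that each stage can be replaced independently: crowdsourced gathering, automated decision-making, or different response channels, without changing the overall structure.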

EMPIRICAL STUDY

The scientific validity of the above-described methods and approaches depends on the cooperation of the target organizations and the maturity of their projects. As a primary database of fact-checking organizations, we used the list compiled by Graves and Cherubini in [4], complemented by additional extended searches. Thus, 50 European organizations located in 27 countries were approached. The majority of the organizations were contacted through their official websites or through their publicly available emails. In 12 cases, however, an online form was the only available means of communication, and for 7 of them Facebook turned out to be the only possible way to establish contact. The organizations were contacted between December 2017 and April 2018. In addition to the online communication, in several cases, such as in Finland, Italy, Latvia, Norway, Poland and the UK, we also asked local contacts for help. The response rate (number of responses relative to the overall number of surveys distributed), although not very high, allowed us to carry out a highly relevant and statistically meaningful analysis. This paper summarises the key findings and recommendations from these performance analyses. The analysis is approached in a multidisciplinary manner and embraces the expertise and experience of people with diverse backgrounds, helping to tackle the issues elaborated in this study.

A. Key findings

The activities of fact-checking initiatives (FCIs) represent an inalienable part of the fight against information disorder. These grass-roots initiatives should be supported and encouraged to evolve further.

  • The biggest reported challenges for FCIs include insufficient stakeholder awareness of the issues related to information disorder and a lack of adequate resources. FCIs should be encouraged to publicise widely information about their activities, methods and outcomes. On the other hand, appropriate financing of these organizations, conditional on their ethical and transparent behaviour, should be perceived as an important element of the successful fight against fake news.
  • Although there is a clear general goal set for all of the investigated FCIs, there is a lack of clarity on the part of the organizations about the sub-goals and objectives that guide their operation. This demands considerable improvement of the organizational culture and practices pursued by these structures.
  • The number of debunked news items/hoaxes (last three-months average) varies greatly across countries and is very much context-dependent, influenced by both the general political situation and ad hoc events (e.g. elections).
  • The majority of FCIs focus on a single content type. Specific visual content (photos, YouTube videos), although known to have a far greater impact on the proliferation of fake news than text, is addressed to a lesser extent.
  • The extent to which automated and semi-automated software is employed in these projects is very low, although complete end-to-end computational fact-checking solutions are already available. Most of the FCIs do pay attention to the revision of their tools, but a significant number of them have not yet considered this option. IT companies can help in this respect by providing advice and technical support to these organizations.
  • The majority of the FCIs select their target sources and media by some predefined criteria. For the fact-checked information, most of the FCIs employ some mechanisms for information source evaluation (credibility, independence, etc.). However, even those that do pay attention to the independence of sources rely only on human expertise and subjective evaluations.
  • Almost all of the projects envisage political and human impact, and most of them are aware of the societal impact their work may have in general. Yet most of the FCIs do not track, monitor or evaluate any impact, which may in practice jeopardize the long-term results and, generally, the far-reaching effect envisaged.
  • Many of the FCIs provided evidence of agenda-setting impact (e.g. legacy media referencing the results of their work) as part of their effectiveness assessment. This, in and of itself, speaks of the importance of fact-checking efforts in complementing the existing strategies for combating information disorder.
  • Considering the distribution of users reached over the duration of a given project, it can be noticed that most of the projects have a similar rate of expansion of their user base, with the oldest projects having a significantly larger audience.
  • Little consideration was reported with regard to sustainability plans and the long-term goals of the FCIs.
  • The respondents of the survey report strong collaboration among most of the EU FCIs. However, there is also a significant overlap in domains of activity when two or more separate FCIs operate in a single country.
  • The transparency of the majority of FCIs (in terms of methodology, funding and operation) remains blurred: there is little willingness among the majority of FCIs in Europe to be transparent about key information. This calls for appropriate guarantees of higher openness and transparency in the activities of these organizations, and for more intensive civil society and public involvement.
  • The number of people engaged in the fact-checking process varies greatly among organizations (from 3 to 30). We noted that almost two thirds of the FCIs report that their staff has been increasing over time.
  • There is high interest and potential for involvement in the regulatory issues related to combating information disorder online at both the national and European levels. Concise national and EU strategies that include the FCIs should be elaborated in this respect.

B. Key Recommendations

The key recommendations extracted from the empirical study address the main stakeholders on whose active cooperation the successful fight against information disorder depends:

i. The public sector

On the basis of our observations and conclusions, a case can be made that efficient and effective efforts to combat information disorder demand more focused and persistent engagement on the part of the states and the European institutions, and may soon become an element of a general set of cybersecurity measures. In this respect, enlarging the scope of the Budapest Convention on Cybercrime, through the adoption of a new additional protocol or the amendment of the existing Additional Protocol concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems, can be considered a possible solution in extreme cases of information disorder. In the EU context, the revision of the Audio-visual Media Services Directive leading to harmonised measures against fake news distribution could be seen as a proper step forward.

Fact-checking and debunking activities, although helpful, do not by themselves solve the urgent and persistent problems of information disorder, which imperil basic social principles and values. There is a need for fundamental change in communication policies (e.g. causal explanations), educational policies (e.g. media literacy curricula) and regulatory policies and practices. Effective implementation of the adopted policies, through a complex set of measures, is most needed at the national and European levels.

ii. The business sector

The business sector should follow and support the overall process of combating information disorder, including through direct support of the FCIs. Businesses should be vigilant and creative in responding adequately. They should strive to bootstrap the adoption of new software solutions and to provide technical support and advice to the organizations that target and fight information disorder. Although social platforms are used to promote and disseminate the work of the FCIs, an interactive mode of promotion is an obvious point where improvements can be sought and achieved.

iii. Civil society

The efforts of a stronger and independent civil society should underpin the process of awareness-raising among stakeholders and the public at large in order to give more credibility to the FCIs work and raise the publicity of the issues related to combating information disorder. At the same time, the civil society debate over the activities of FCIs is an essential element of the overall scrutiny in a democratic society aiming at greater openness and transparency, and with that, facilitating the trust establishment among people, platforms, organizations, institutions, and societies per se.

iv. Fact-checking and debunking organisations and initiatives

FCIs should broaden their methodological means for approaching fact-checking issues. This includes relying on a variety of experts from different fields and being open to employing novel approaches, including computational semantic analysis. The latter even appears urgent, considering the limited, imperfect, slow and costly human-based approaches to fact-checking and, moreover, the emergence of artificial intelligence techniques for the creation and distribution of fake news and information in general. In this respect, cooperation with IT companies and the business sector at large proves crucial.

FCIs should increase and adjust their efforts towards wider coverage of specific visual (photo) and audio-visual (video) materials. Finally, proper sustainability and business plans for the FCIs need to be in place.

v. Information Technologies community

The technical community should support the innovation efforts of FCIs, enabling them to keep their operational and methodological means up to date and to remain effective and efficient in their fight against information disorder. This support must come through concrete solutions and products adjustable to the FCIs’ contexts of operation.

vi. Academic community

The academic community must put effort into exploring the structure, organizational culture and functions of FCIs with the purpose of improving their performance. Carrying out a fast-track yet comprehensive scientific performance assessment of all EU FCIs based on alternative methodologies (possibly in comparison with non-EU FCIs) is both desirable and efficient in dealing with emergent information disorder issues. It would also be useful to carry out a performance assessment of EU FCIs based on an additional set of criteria/indicators, such as economic indicators (e.g. ROI, cost per output), at least as indicative measures.

vii. The media

The media at the national, European and global levels should give greater publicity to the work and achievements of FCIs. Cooperation between media accountability bodies and the network of FCIs can prove a fruitful undertaking in the process of combating fake news and information disorder in general.

C. Discussion

Although performance analysis was at the heart of the empirical study, the study itself supports a more general hypothesis: although lack of trust is mainly connected to transparency issues, the root of the problem goes a few steps back, to the provision of incentives for fact-checking organizations to be transparent in the first place. This implies a willingness to share both the results and the problems arising from the organizations’ work. This, in turn, calls for proper inter-relations between all stakeholders concerned by the information disorder problem. It is only through a systemic approach that the complete chain of trust can be established in the content, the platforms, the service providers and the whole Internet value chain. The human-centricity of all these systems and the blurred borders between providers and consumers in the digital age call for a holistic approach to the design, analysis and maintenance of the technological systems on which the whole society relies. Among the most important realizations of this work, however, is that trust is not something that can be embedded into the design of systems, but something that emerges out of the interactions among the entities comprising them. Therefore, the study extracts specific stakeholder recommendations for combating the problem of information disorder and revitalizing the trust chains on the Internet.

CONCLUSION

The study analysed the problem of information disorder on the Internet through the fake news phenomenon and established its relation to the socio-economic phenomenon of trust. It is a contribution to the development of fact-checking systems and to the fight against information disorder in general but, more importantly, to the debate on how to make the Internet a trustworthy habitat for both virtual and non-virtual entities.

ACKNOWLEDGMENTS

The author would like to acknowledge that this work was supported by the EU H2020 CSA project COMPACT (Grant agreement No 762128) and was performed as part of the activities within the project. The author also gratefully acknowledges the contribution and help of Dr. Bissera Zankova and Dr. Andrej Skolkay in carrying out the presented study.

REFERENCES

[1] Nielsen, R. Kleis. Digital News Report 2017. Oxford: Reuters Institute for the Study of Journalism, 2017.
[2] Stencel, M., “A big year for fact-checking, but not for new U.S. fact-checkers.” Reporterslab, https://reporterslab.org/big-year-fact-checking-not-new-u-s-fact-checkers, 2017.
[3] Bae Brandtzaeg, P. and A. Følstad. “Trust and Distrust in Online Fact-Checking Services.” Communications of the ACM 60 (9): 65-71, 2017.
[4] Graves, L. and F. Cherubini. The Rise of Fact-checking Sites in Europe. Reuters Institute for the Study of Journalism, 2016.
[5] Marwick, A. and R. Lewis. Media Manipulation and Disinformation Online. Data & Society Research Institute, 2017.
[6] Rubin, V. L., Y. Chen, and N. J. Conroy. “Deception detection for news: Three types of fakes.” Proc. Assoc. Info. Sci. Tech. 52: 1-4, 2015.
[7] Allcott, H. and M. Gentzkow. “Social media and fake news in the 2016 election.” Journal of Economic Perspectives 31 (2): 211-236, 2017.
[8] Hirst, M. “Towards a political economy of fake news.” The Political Economy of Communication 5 (2): 82-94, 2017.
[9] Wardle, C. and H. Derakhshan. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. Strasbourg: Council of Europe, 2017.
[10] Shao, C., G. Luca Ciampaglia, A. Flammini, and F. Menczer. “Hoaxy: A Platform for Tracking Online Misinformation.” In Proceedings of the 25th International Conference Companion on World Wide Web (WWW ’16 Companion). Republic and Canton of Geneva, Switzerland, 2016.
[11] Rubin, V. L. and N. Conroy. “Discerning truth from deception: Human judgements & automation efforts.” First Monday 17: 3-5, 2012.
[12] Wineburg, S., S. McGrew, J. Breakstone, and T. Ortega. Evaluating Information: The Cornerstone of Civic Online Reasoning. Stanford Digital Repository, 2016.
[13] Tambuscio, M., G. Ruffo, A. Flammini, and F. Menczer. “Fact-checking Effect on Viral Hoaxes: A Model of Misinformation Spread in Social Networks.” In Proceedings of the 24th International Conference on World Wide Web (WWW ’15 Companion), 977-982. New York, NY: ACM, 2015.

