The global media sphere recognizes information disorder as a serious challenge. Fabricated news has become more sophisticated and harder to control, and false content spreads faster than the means currently in place to stop it. The consequences of this disorder are felt in everyday life: people make critical decisions about their livelihoods and political beliefs based on information that may be inaccurate.
The problem is shaped by local context and emerges differently across societies. The nature of fake news varies by country, reflecting local journalistic styles1 and news outlets, which makes it harder to identify, especially when it is embedded within otherwise credible narratives.
Emerging technologies add speed and scale to the problem. Artificial intelligence (AI), when weaponized, can generate realistic deepfakes, and social media platforms amplify these outputs through algorithmic content distribution. Because of their speed, reach, and engagement mechanics, these platforms have become central actors in shaping public perception, and their algorithms deepen the problem by reinforcing echo chambers and rewarding sensational content.
The World Economic Forum has identified misinformation and disinformation as a leading short-term global risk,2 with far-reaching implications beyond the 7% who directly perceive it as a threat. The spread of misinformation and disinformation exposes systemic weaknesses: it allows foreign interference in elections, twists reality in conflict zones, and damages trust in international products.
Today, content creators can profit from viral fake news. Traditional media institutions avoided publishing false content because it risked reputational and financial loss; now, with minimal resources, anyone can launch a website and reach audiences through advertising platforms such as Google AdSense.3
In response, the fact-checking movement has gained momentum as part of a broader shift toward more effective journalism. Journalists are tasked with determining the accuracy of claims, a role that requires an interpretative approach: examining claims against credible sources. Traditional news outlets now compete with other media and social networks for public attention, and in many cases that competition favors engagement over accuracy. As a result, truth has lost value, and public opinion is shaped more by emotions, political partisanship, and cognitive biases such as confirmation bias. Even journalists make decisions under social pressure and a desire to be validated by peers within the profession.
People connect with information through emotional, cultural, and psychological factors. In Iraq, where trust is fragile after decades of conflict, political instability, and foreign interventions, people often rely on emotionally meaningful narratives instead of verified facts. Humans are naturally drawn to information that supports pre-existing beliefs, a tendency known as confirmation bias. For many Iraqis, information that favors their sectarian, tribal, or political affiliation is more likely to be accepted and shared, even when it is misleading or false. This is also a global phenomenon.
Repeated exposure to a claim not backed by evidence leads people to perceive it as true. This psychological inclination is another type of cognitive bias, known as the illusory truth effect.4 In a fragmented media sphere, where the same narratives may be circulated by multiple partisan channels or social media accounts, even fake news and unverified claims come to appear believable, or at least hard to distinguish from the truth. The effect is most influential when emotions run high, people are overwhelmed, and the demand for answers increases. During such periods, information that appeals to emotions like fear, pride, or anger spreads much more rapidly than calm, fact-based reporting.
In Iraq, social media platforms are widely used, and they are built to keep people engaged, often by showing content that either supports users' views or triggers emotional responses. As a result, information that provokes outrage and division is far more likely to gain visibility. This dynamic has fostered echo chambers, in which individuals are exposed mainly to groups whose opinions match their own. These polarized online spaces have become the main venues for discussing national issues such as governance, foreign influence, and corruption, and they tend to promote extreme and sensational views. Public opinion splits along sectarian, ethnic, or ideological lines; for example, Sunnis and Shias, Kurds and Arabs, or secularists and religious groups. Some of the misleading content circulating in Iraq today is genuine content used out of context: an old video re-shared with a false caption, or a real news story quoted selectively to mislead.
Scholars describe this crisis in the spread of information as a state of information disorder. Addressing it requires careful distinctions between its different forms: misinformation, disinformation, and malinformation. When false content is shared without harmful intent, often due to misunderstanding or a lack of fact-checking, it is commonly referred to as misinformation. Disinformation, by contrast, is deliberately false and intended to cause harm. Malinformation, meanwhile, refers to genuine information used to inflict harm, for example by leaking private data into the public sphere.
Tracking the spread of information disorder is a challenging task, especially on social media platforms. Technological advancement, like many digital phenomena, has brought both advantages and disadvantages. While technology has expanded access to both legacy media and newer social platforms, it has also paved the way for more diverse and dynamic forms of false or harmful content to emerge.
The motivations behind the dissemination of such content vary. Some narratives are designed to manipulate public opinion for political purposes, while others are created for profit. Certain false stories are meant to entertain, and some originate from genuine mistakes or even acts of desperation. In this complex environment, there is no universal model of deception: a viral conspiracy theory is not the same as a coordinated disinformation campaign led by a state.5
Graphs and visuals can also be misleading, especially when data is selectively presented or when the scale is altered, for example through Y-axis manipulation in bar graphs. These techniques are used to promote certain products by exaggerating their features and creating misleading comparisons with competitors.
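To make the Y-axis trick concrete, the short Python sketch below (using matplotlib, with product names and ratings invented purely for illustration) plots the same two values twice: on a full scale the bars look nearly identical, while a truncated scale turns a two-point gap into what appears to be a decisive lead.

```python
# Illustrative sketch of Y-axis manipulation; the products and ratings are
# invented for this example and do not come from the report.
import matplotlib.pyplot as plt

products = ["Product A", "Product B"]
ratings = [91, 93]  # nearly identical scores

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

# Honest version: the axis starts at zero, so the bars look almost equal.
honest.bar(products, ratings)
honest.set_ylim(0, 100)
honest.set_title("Full scale (0-100)")

# Misleading version: the axis is truncated, so the same two-point gap
# looks like a dramatic difference between the products.
misleading.bar(products, ratings)
misleading.set_ylim(90, 94)
misleading.set_title("Truncated scale (90-94)")

plt.tight_layout()
plt.show()
```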
These practices undermine democratic processes, delay urgent climate action, and weaken public health campaigns. The impact of information disorder extends well beyond individual belief systems; it threatens the cohesion and functioning of entire societies.
The Case of Iraq
In Iraq, the roots of information disorder can be traced back to the Ba’athist era, when media acted as a propaganda tool to promote the regime’s ideology and evoke fear. After the fall of Saddam Hussein’s government in 2003, the media continued to serve political interests, this time under the influence of the U.S. presence and the new political environment it created.6 Disinformation shifted form instead of dissipating, and false content was again used to spread fear during the rise of ISIS in 2014. As a result, for many Iraqis, encounters with fake news have become an expected part of daily media consumption.
A notable case occurred during the COVID-19 pandemic, when Reuters published a report stating that official figures for COVID-19 infections in Iraq were being underreported. In response, the Iraqi authorities suspended the agency’s operations for three months.7 The reaction reveals the government’s sensitivity to external scrutiny and the precarious position of the press when reporting on sensitive issues. In another incident, during anti-government protests, deepfake videos circulated that falsely portrayed protesters as violent aggressors against security forces, casting doubt on the legitimacy of the movement. In a further troubling case, a wave of fake videos targeted female candidates in the 2018 elections.8 Under intense public pressure, some of the candidates withdrew from the election campaign. These examples illustrate the impact of information disorder in Iraq on the three main aspects shown in figure 5.
Iraq’s media landscape is caught between two opposing forces. On one side is a politicized media sector, often funded by political actors or business figures closely aligned with ruling parties.9 These outlets prioritize partisan interests over factual accuracy. On the other side are independent journalists, human rights defenders, and civic media producers who work to promote transparency, yet even these voices face challenges, including intimidation and limited funding. As a result, journalism in Iraq operates in a contradictory space, torn between the imperative to hold power to account and the pressure to maintain national unity and security. Nevertheless, there are efforts within the media community to push back against the spread of fake news: a meaningful culture of media accountability has begun to take shape through independent verification initiatives, journalist networks on social media, and international partnerships with fact-checking organizations.
The practice of fact-checking has seen remarkable growth over the last decade: the number of fact-checking organizations worldwide rose from just 11 in 2008 to 424 in 2022.10 A key driver of this change has been the growing integration of fact-checking into journalism and into the research work of think tanks. In politically unstable environments such as Iraq, fact-checkers often work anonymously or remotely to protect their safety. One example is Tech4Peace, a platform focused on Iraq whose contributors include members based in Canada. Fact-checkers play a critical role in identifying and verifying information disorder across digital platforms.
In the past, fact-checking was a manual process performed by professionals trained in critical thinking and research. This method proved effective, but it was also resource-intensive and slow to scale. In recent years, automated fact-checking tools, an application of AI, have begun to process and verify information at large volumes.
However, these automated systems are not without limitations. The sheer volume of digital content and the pace at which narratives spread often surpass the capabilities of these technologies. Furthermore, certain forms of deception, such as deepfakes, continue to bypass automated detection and require human intervention.
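As a rough illustration of one building block inside such systems, the Python sketch below matches an incoming claim against a small archive of previously fact-checked claims using TF-IDF similarity from scikit-learn; the claims, verdicts, and similarity threshold are invented for this example, and production tools rely on far richer models and curated fact-check archives.

```python
# Toy sketch of automated claim matching: compare a new claim against claims
# that human fact-checkers have already reviewed. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archive of claims with verdicts from human fact-checkers.
checked_claims = [
    ("Drinking hot water cures the virus.", "False"),
    ("The election date was moved to next month.", "False"),
    ("The ministry announced new fuel subsidies.", "True"),
]

incoming_claim = "Hot water can cure the virus, doctors say."

# Represent claims as TF-IDF vectors and compare them by cosine similarity.
vectorizer = TfidfVectorizer()
archive_vectors = vectorizer.fit_transform([claim for claim, _ in checked_claims])
incoming_vector = vectorizer.transform([incoming_claim])
scores = cosine_similarity(incoming_vector, archive_vectors)[0]

best = scores.argmax()
if scores[best] > 0.5:  # arbitrary threshold for this sketch
    claim, verdict = checked_claims[best]
    print(f"Likely match: '{claim}' (verdict: {verdict}, score {scores[best]:.2f})")
else:
    print("No close match; route the claim to a human fact-checker.")
```

Even in this toy form, the sketch shows why automated tools struggle at scale: paraphrases that share little vocabulary with archived claims, or entirely new narratives, fall below any similarity threshold and still require human review.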
Timing also plays a crucial role in fact-checking. Correcting misinformation after exposure (debunking) can be effective, but preemptive correction (prebunking) may be more impactful, especially in dealing with conspiracy theories such as the claims that fueled vaccine hesitancy during COVID-19.11
In addition to these strategies, promoting digital literacy and critical thinking among social media users is a necessary step. As information disorder evolves, fact-checking mechanisms should continue to advance. Blockchain offers potential as a cutting-edge strategy for verifying content sources. Other means, such as the reverse image search tools built into search engines and subscriptions to global fact-checking services such as PolitiFact and Snopes, can help users and news agencies independently assess the credibility of content.
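The verification idea behind such provenance approaches can be sketched in a few lines: a publisher registers a cryptographic fingerprint of the original item (in a full system, on a distributed ledger), and anyone can later re-hash a circulating copy to check whether it matches. The article text and the registry dictionary below are stand-ins invented for illustration.

```python
# Minimal sketch of content fingerprinting for source verification; a real
# provenance system would anchor the digest on a ledger and handle images,
# video, and re-encodings, not just exact text matches.
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest that identifies this exact content."""
    return hashlib.sha256(content).hexdigest()

# Publisher side: register the fingerprint of the original article.
original_article = "Ministry announces new budget figures for 2024.".encode("utf-8")
registry = {fingerprint(original_article): "example-news-agency"}

# Reader side: verify a copy circulating later on social media.
circulating_copy = "Ministry announces record budget cuts for 2024.".encode("utf-8")
digest = fingerprint(circulating_copy)

if digest in registry:
    print(f"Verified: registered by {registry[digest]}")
else:
    print("No match in the registry: the content was altered or never registered.")
```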