Disinformation Economies
The term cognitive warfare entered formal military doctrine relatively recently, but the reality it describes is as old as organised human conflict. Armies have always understood that defeating the enemy's will to fight is as decisive as destroying the enemy's capacity to do so. What Sun Tzu called the acme of skill, winning without fighting, is essentially a cognitive proposition: alter the adversary's perception of the situation sufficiently and physical confrontation becomes unnecessary. What is genuinely new in the twenty-first century is not the strategic objective but the industrial infrastructure through which it is pursued. Digital platforms reaching billions of people simultaneously, algorithmic amplification systems designed to maximise emotional engagement, artificial intelligence generating persuasive content at negligible marginal cost, and a commercial market selling influence operations to any paying client have transformed cognitive warfare from a specialised state capability into a distributed, accessible and largely unregulated industry whose products shape electoral outcomes, military morale, alliance cohesion and public health behaviour across the entire world.
The foundational insight of cognitive warfare in the digital era is that the infrastructure of manipulation was not built by intelligence agencies or military planners. It was built by advertising technology companies optimising for user engagement. The attention economy, the system through which digital platforms monetise the time and emotional investment of their users, created the precise conditions that make large-scale cognitive warfare possible: billions of people spending hours daily on platforms whose algorithms are designed to surface content that generates strong emotional reactions, with no structural distinction between accurate and inaccurate information, between authentic and synthetic content, or between organic and coordinated amplification. The tools of influence operations (micro-targeted messaging, emotional narrative framing, coordinated amplification networks and A/B testing of persuasive content) are the tools of digital marketing. They were normalised and diffused globally by the commercial technology sector before any military or intelligence agency began applying them to strategic influence objectives.
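To make the structural point concrete, the sketch below shows a minimal engagement-only feed ranker. The fields, weights and figures are illustrative assumptions, not any platform's actual system; what matters is that nothing in the scoring function references accuracy, provenance or whether amplification is coordinated.

```python
# Minimal illustrative sketch (assumed fields and weights, not any platform's real system):
# a feed ranker that orders items purely by predicted engagement.
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    predicted_reactions: float     # model's estimate of likes, shares, comments
    predicted_dwell_seconds: float
    is_accurate: bool              # known here for illustration; invisible to the ranker

def engagement_score(item: Item) -> float:
    # Illustrative weights; real systems tune against engagement metrics, not accuracy.
    return 0.7 * item.predicted_reactions + 0.3 * item.predicted_dwell_seconds

def rank_feed(items: list[Item]) -> list[Item]:
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("measured policy explainer", predicted_reactions=40, predicted_dwell_seconds=90, is_accurate=True),
    Item("outrage-bait fabrication", predicted_reactions=400, predicted_dwell_seconds=30, is_accurate=False),
])
print([i.text for i in feed])  # the fabrication ranks first; accuracy never entered the score
```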
This origin matters because it defines the structural problem. Regulating disinformation operations is not simply a question of identifying and sanctioning state actors who misuse social media platforms. It requires confronting the fact that the platforms' core business model creates the conditions for manipulation to be effective and commercially rewarding for anyone who can generate engaging content, regardless of its accuracy or strategic intent. A state-sponsored disinformation network and a domestic partisan media operation and a financially motivated fake news farm and a genuinely passionate but factually incorrect grassroots community all interact with the same algorithmic incentive structure and produce overlapping effects on the information environment. Distinguishing between them, attributing specific effects to specific actors and designing targeted interventions is a technical and political challenge of a different order of magnitude from identifying and jamming a foreign radio broadcast.
Russia's Internet Research Agency, the St Petersburg-based organisation that conducted coordinated influence operations across US, European and African information environments from roughly 2014 onwards, is the most extensively documented state-run cognitive warfare operation in the contemporary period. The IRA's operations during the 2016 US presidential election, documented in exhaustive detail by the Mueller investigation and subsequent academic research, involved the creation of hundreds of fake American social media personas, the purchase of over $100,000 in targeted Facebook advertising, the organisation of real-world political events in the United States promoted through fake accounts, and the systematic amplification of divisive content on immigration, race and policing designed to deepen social polarisation rather than promote any specific policy outcome. The IRA was not primarily trying to elect a particular candidate. It was trying to damage the quality of US democratic discourse as a strategic end in itself.
China's information operations have followed a different strategic logic, focused more on narrative control regarding specific territorial and diplomatic disputes, the suppression of information about events the Chinese government wishes to limit internationally, and the promotion of Chinese foreign policy positions through coordinated diplomatic and media channels. The concept of the Three Warfares (public opinion warfare, psychological warfare and legal warfare) has been embedded in Chinese military doctrine since 2003 and represents a systematic framework for integrating cognitive operations into conventional military strategy. China's sharp power operations, documented extensively by Freedom House and the National Endowment for Democracy, involve the cultivation of media relationships, the funding of content in target countries' languages, the coordination of diaspora community messaging and the use of economic leverage to discourage critical coverage in commercial media organisations dependent on Chinese market access.
The strategic objective of advanced influence operations is not to convince audiences of a specific false narrative. It is to make them doubt whether reliable information exists at all. Sustained doubt is more durable than a specific lie and more difficult to correct.
Sub-Saharan Africa has become one of the most active and least regulated theatres of cognitive warfare, with multiple external actors conducting systematic influence operations across the continent's social media ecosystems to shape political perceptions, election outcomes and attitudes toward foreign military presences. The Sahel region, where France's withdrawal from Mali, Burkina Faso and Niger has been accompanied by surging anti-French sentiment across social media platforms, provides the clearest documented example of how cognitive warfare directly shapes geopolitical outcomes. Meta's transparency reports, Stanford Internet Observatory investigations and reporting by Africa Check and other regional fact-checking organisations have documented coordinated influence networks, linked to Russian-affiliated operators, systematically amplifying anti-French and pro-Russian narratives across French-speaking Africa through fake accounts, fabricated media content and coordinated engagement patterns.
The effectiveness of these operations in the Sahel context reflects a structural feature of the African information environment that external actors have exploited with considerable sophistication: the combination of high mobile internet penetration, low legacy media credibility, strong appetite for narratives that attribute Africa's development challenges to external actors, and limited institutional capacity for systematic content moderation or influence operation attribution. These are not conditions that external operators created. They are conditions that pre-existed the influence operations. But they represent the precise information environment in which cognitive warfare produces its most durable effects, because the structural conditions that make audiences receptive to specific narratives are rooted in genuine historical grievances and legitimate political frustrations to which accurate information and manipulated narratives both speak.
The integration of large language models, image synthesis systems and voice cloning technology into the cognitive warfare toolkit has qualitatively changed the cost structure and capability profile of influence operations in ways whose full strategic implications are still being absorbed. The most significant change is not that AI enables more convincing fakes, though it does. It is that AI eliminates the human labour cost that previously set a practical ceiling on the scale of influence operations any given actor could sustain. Producing thousands of contextually appropriate, linguistically natural social media posts in multiple languages, each tailored to the specific preoccupations of its target audience, previously required teams of human content producers with relevant cultural and linguistic competence. It now requires a prompt, a model and a distribution mechanism. The scaling constraint that previously limited influence operations to well-resourced state actors has been removed.
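A back-of-envelope comparison illustrates the scaling argument. The throughput and cost figures below are assumptions chosen for illustration, not measurements of any real operation; the point is the shape of the ratio, not the specific numbers.

```python
# Back-of-envelope sketch of the cost-scaling argument. All figures are assumed
# for illustration only; it is the ratio, not the numbers, that carries the point.
posts_per_day = 10_000

# Human content farm: assumed per-writer throughput and fully loaded daily labour cost.
posts_per_writer_per_day = 50
writer_daily_cost_usd = 120.0
human_cost = (posts_per_day / posts_per_writer_per_day) * writer_daily_cost_usd

# Generative model: assumed per-post inference cost, plus a small human review team.
model_cost_per_post_usd = 0.002
reviewers = 2
ai_cost = posts_per_day * model_cost_per_post_usd + reviewers * writer_daily_cost_usd

print(f"Human-staffed operation:  ~${human_cost:,.0f}/day")  # ~$24,000/day under these assumptions
print(f"Model-assisted operation: ~${ai_cost:,.0f}/day")     # ~$260/day under these assumptions
print(f"Cost ratio: ~{human_cost / ai_cost:.0f}x")
```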
The defensive implications are equally significant. AI-generated synthetic media, including deepfake video, synthetic audio and AI-written text, is now frequently indistinguishable from authentic human-generated content by casual inspection, and increasingly difficult to detect even with dedicated forensic tools. The detection-generation race is structurally asymmetric in the same way that the offence-defence imbalance in cyber operations is asymmetric: generating convincing synthetic content requires a single successful generation attempt, while detection requires correctly classifying every piece of content encountered. As generation quality improves, the fraction of synthetic content that evades detection grows. The practical consequence is that the epistemic environment is becoming progressively less reliable at precisely the moment when complex global challenges require the public capacity for collective reasoning to function well.
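The arithmetic of that asymmetry can be made explicit. The volumes and error rates below are assumed for illustration, but they show why even a detector with a low miss rate leaves a large absolute quantity of synthetic content in circulation, and why its alerts are swamped by false positives when synthetic content is a small share of the total.

```python
# Illustrative sketch of the detection asymmetry, under assumed volumes and error rates.
synthetic_items = 1_000_000       # assumed daily volume of synthetic posts
authentic_items = 99_000_000      # assumed daily volume of authentic posts
miss_rate = 0.05                  # detector fails to flag 5% of synthetic items
false_positive_rate = 0.01        # detector wrongly flags 1% of authentic items

undetected = synthetic_items * miss_rate
true_flags = synthetic_items * (1 - miss_rate)
false_flags = authentic_items * false_positive_rate
precision = true_flags / (true_flags + false_flags)

# As generation quality improves, miss_rate rises and both figures worsen.
print(f"Synthetic items evading detection per day: {undetected:,.0f}")           # 50,000
print(f"Share of flagged content that is actually synthetic: {precision:.0%}")   # ~49%
```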
In the weeks before Senegal's March 2024 presidential election, a coordinated network of social media accounts amplified fabricated content depicting opposition candidates making inflammatory statements they had never made. The content was generated using AI tools that had become accessible commercially at low cost within the preceding twelve months. The operations were documented in real time by fact-checking organisations including Africa Check and Dakar Actu, but the speed of amplification meant that a significant portion of the target audience had encountered the fabricated content before corrections were published. The incident illustrates a structural feature of AI-assisted cognitive warfare in emerging democracy contexts: the correction cycle operates on a timescale that is fundamentally slower than the dissemination cycle, and audiences exposed to fabricated content first are demonstrably less responsive to subsequent correction than audiences who encountered accurate information first. The first narrative to reach an audience retains a structural advantage regardless of its accuracy.
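A toy timing model captures the structural point. The sharing-curve parameters below are assumptions, not estimates fitted to the Senegal case; they illustrate how sharply the share of an audience exposed before correction depends on the correction delay.

```python
# Toy timing model with assumed parameters (not fitted to any documented incident):
# viral spread is approximated by a logistic curve over the item's eventual audience.
import math

def share_reached(t_hours: float, midpoint_hours: float, steepness: float) -> float:
    """Fraction of the item's eventual audience exposed by time t (logistic curve)."""
    return 1.0 / (1.0 + math.exp(-steepness * (t_hours - midpoint_hours)))

midpoint = 8.0     # assumed: half of the eventual audience is reached 8 hours after posting
steepness = 0.5    # assumed shape of the sharing curve

for correction_delay in (6, 12, 24):   # hours until a fact-check is published and circulated
    exposed_first = share_reached(correction_delay, midpoint, steepness)
    print(f"Correction after {correction_delay:>2}h: "
          f"{exposed_first:.0%} of the eventual audience saw the fabrication first")
```

Under these assumptions a correction arriving six hours after posting still beats most of the audience to the story, while one arriving twelve hours later reaches an audience of which nearly nine in ten have already seen the fabrication, which is the structural disadvantage the paragraph describes.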
Governments, platforms and civil society organisations have invested significant resources in responding to cognitive warfare and disinformation operations since 2016. Content moderation at scale, transparency requirements for political advertising, coordinated inauthenticity detection and removal, media literacy education programmes and international frameworks for attribution and response have all been developed and deployed. The results have been mixed in ways that reflect fundamental structural tensions rather than mere implementation failures. Platform content moderation operates at a scale that makes systematic human review impossible and algorithmic review inconsistent. Transparency requirements for political advertising are effective for formal electoral campaigns and largely irrelevant for organic-seeming influence operations that route around advertising systems. Attribution of disinformation to specific state actors is technically challenging, politically contested and rarely produces accountability consequences proportionate to the strategic impact of the operations attributed.
The most consequential limitation is jurisdictional. Information systems are global. Regulation is national. A coordinated influence operation originating in one jurisdiction, routing through server infrastructure in a second, targeting audiences in a third and using content platforms headquartered in a fourth faces no single regulatory authority with jurisdiction over the entire chain. The European Union's Digital Services Act represents the most ambitious attempt to impose platform-level accountability for harmful content within a major jurisdiction, requiring large platforms to conduct and publish systematic risk assessments of their contribution to disinformation and to implement proportionate mitigations. Its implementation is ongoing and its long-term effectiveness remains to be demonstrated. Outside the European Union, regulatory frameworks are either less demanding, less consistently enforced or effectively absent. In the Global South countries that are the most active current theatres of cognitive warfare, national regulatory capacity is typically the weakest and the gap between the sophistication of the operations being conducted and the institutional capacity to detect and respond to them is the widest.
Information systems are global. Regulation is national. A coordinated influence operation can originate in one country, route through a second, target a third, and use platforms headquartered in a fourth. No single authority has jurisdiction over the entire chain.
Cognitive warfare is not a marginal or exotic dimension of modern conflict. It is increasingly the primary domain in which the political conditions for physical conflict are created or prevented, in which alliance cohesion is sustained or eroded, in which the domestic political will to sustain military commitments is maintained or undermined, and in which the international legitimacy that enables or constrains state action is constructed. The disinformation economy that has grown up around this domain is not a temporary aberration driven by a specific technology cycle. It is the product of structural incentives in the global information economy that will persist and intensify as AI reduces the cost of content production toward zero and as the gap between generation and detection capabilities compounds.
Cognitive warfare moves the focus from physical territory to perception, from weapons to narratives, from force to influence. In this system, control is not only about what happens. It is about how it is understood. And in that space, perception is not merely power. It is the battlefield itself.