
It’s time to focus on information warfare’s hard questions

Our collective obsession with information operations is distracting us from more fundamental questions about online influence.
Russian President Vladimir Putin answers questions from Russian and foreign internet users during a live press conference on July 6, 2006. (DENIS SINYAKOV/AFP via Getty Images)

In 2016, Russia sparked our current era’s obsession with online information operations. By meddling in that year’s U.S. presidential election via a plethora of online tools, Moscow’s operatives illustrated what seemed like the boundless potential of digital manipulation.

Since then, social media companies and governments have made massive investments in catching these efforts. As a report published by Facebook parent company Meta at the tail end of 2022 illustrates, these efforts appear to have reached something of an equilibrium with Russian information operators. Russia, along with several other states, still runs malign online information operations, but these campaigns to influence public opinion are detected and taken down with such speed that they rarely reach significant audiences.

This state of equilibrium means that it’s high time to ask more fundamental questions about online information operations and the resources being mustered in countering them. Such efforts — and the coverage of them — mean that our collective attention is focused far more on content and mechanics than on real-world impact and on our information ecosystem more broadly.

Six years into our collective preoccupation with information operations and how platforms wrestle with them, the question of whether these operations even work in the first place — and if so, how — has gotten lost. The incentives for all parties — platforms, governments, and illicit actors alike — are stacked in favor of operating on the assumption that they do, while the science looks inconclusive at best.


Meta capped off 2022 by detailing how it has performed more than 200 takedowns of covert influence operations on its platforms, the culmination of a strategy first used against Russian actors in 2017. In just five years, Facebook’s threat analysts have arguably served as the vanguard of a new industry — monitoring and countering malign activity online.

Five years on, this industry and those responsible for carrying out information operations — in particular, Russia — have become co-dependents. Facebook’s business model is premised on keeping users engaged. A social media platform on which advertisers can influence the preferences of engaged users and drive purchases is a lucrative one. And online engagement is driven in large part by outrage. Highly refined controversy, meanwhile, is precisely what Russian troll and bot operators produce best. Moscow’s security services, oligarchs, and news advertisers dole out lucrative contracts for it.

This co-dependence is reinforced by iteration on both sides, which naturally creates new opportunities and vulnerabilities to exploit. Illicit actors hone their techniques to influence audiences, while Meta refines its own techniques to detect them, often spurring entirely new product offerings in the process.

The media routinely covers actions on either side — the defenders and the attackers — of this co-dependent relationship, and this coverage broadly legitimizes the value of their respective efforts. At a certain point, both offense (information operations by malign actors) and defense (takedowns by platforms) become the same thing — engagement. Though they are being caught, Russian operatives can capitalize on media coverage, using the notoriety to land additional funding and clout. At the same time, Meta’s threat hunters have taken on more prominence — acting as a leading hub for coordination with civil society, government, and research organizations seeking to clean up the information environment.    

This observation is not a critique of Meta, whose global threat disruption team is unquestionably expert and an unalloyed societal good. Rather, it illustrates the equilibrium that Meta and Moscow have reached — a point beyond which neither side’s best efforts are likely to advance. Pitted against well-resourced state and quasi-state actors, Meta’s achieving this state of parity is no small feat. But what it might signify about information warfare more broadly, including its prospects and limitations in disrupting societies, remains to be seen.


This is not the first time the tech sector has grappled with such questions. In 1948, the cybernetics and systems theory pioneer Ross Ashby described “homeostasis” as a relatively stable equilibrium between interdependent elements. He demonstrated this state with an invention he called a “homeostat machine,” which was composed of four aluminum cubes, bound together by an electronic gearbox at the base. As the scholar Thomas Rid documents in his history of cybernetics, “when the machine was switched on, the magnets in one cube would be moved by the electrical currents from the others. The magnets’ movements, in turn, altered the currents, which then changed the movements again, and so on.” Whatever perturbation the machine encountered, “it soon found a way to adapt to the new conditions,” moderating otherwise wild swings back into calm, buzzing alignment. Ashby deemed this “coordinated activity to restore balance” a kind of brain-like decision-making, since the unit could determine how to distribute current to maintain balance in response to almost 400,000 possible variations.
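
For readers who want the mechanics made concrete, below is a minimal sketch, in Python, of the kind of ultrastable feedback loop Ashby was demonstrating. It is an illustration, not a model of the 1948 hardware: the unit count, bounds, step size, and continuous random re-weighting (standing in for the homeostat’s 25-position uniselectors, which across four units produced the nearly 400,000 configurations noted above) are all simplifying assumptions.

```python
import random

# A minimal sketch of Ashby-style "ultrastability," not a faithful model of
# the 1948 homeostat. Four units nudge one another through weighted
# connections; any unit whose state drifts out of bounds randomly rewires
# its incoming weights, and the process repeats until the system settles.
UNITS, BOUND, STEP = 4, 1.0, 0.1

states = [random.uniform(-BOUND, BOUND) for _ in range(UNITS)]
weights = [[random.uniform(-1, 1) for _ in range(UNITS)] for _ in range(UNITS)]

for _ in range(10_000):
    # Each unit's state is nudged by the weighted states of the others,
    # the electrical analogue of one cube's magnets moving the rest.
    states = [
        s + STEP * sum(weights[i][j] * states[j] for j in range(UNITS) if j != i)
        for i, s in enumerate(states)
    ]
    for i, s in enumerate(states):
        if abs(s) > BOUND:
            # Out of bounds: the unit "steps" to a new random configuration,
            # much as the homeostat's uniselectors did, and tries again.
            weights[i] = [random.uniform(-1, 1) for _ in range(UNITS)]
            states[i] = random.uniform(-BOUND, BOUND)

print("states after perturbation and rewiring:", [round(s, 3) for s in states])
```

Run repeatedly, the loop tends to end with all four states inside their bounds, the “coordinated activity to restore balance” Ashby described, though how long that takes depends on the luck of the rewiring.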

To some, the invention was a breakthrough for theorizing about machine learning and automation; to others, it was a purposeless perpetual-motion machine. Anthropologists and philosophers looked at this “black box” and found it a genius representation of the learning characteristics of organisms and their environments. Skeptics, including the early computing pioneer Julian Bigelow, found the contraption to “be a beautiful replica of something, but heaven only knows what.” As Rid points out, while the device “exhibited goal-seeking behavior,” the goal itself was largely a matter of interpretation, to the extent that it was apparent at all.

Meta’s work in countering what the company calls “Coordinated Inauthentic Behavior” represents a similarly ingenious adaptation to the currents of adversary tradecraft on its platforms. However, the most apparent, if inadvertent, goal seems less about proving the wholesale futility of information operations and more about refining the disciplines of attempting and detecting them at scale. In that regard, overemphasis on social media manipulation risks both distracting from and reinforcing two notions in need of greater scrutiny: first, that platforms are neutral conduits, co-equal victims with their users of external forces largely beyond their own control; second, and far more importantly, that online influence operations necessarily work in the first place.

For governments and civil society organizations grappling with ways to counter information-based threats, testing these notions will be key to ensuring effective policy interventions over the longer term — and to justifying their efforts and resource expenditures in the meantime. In other words, we need a better understanding of how humans interact with information.

In particular, the degree to which social media use is a causal factor — rather than a mere correlate — of human behavior and belief formation remains largely mysterious. Militaries thus struggle with how to measure the effectiveness of their information operations, and academics strain to break beyond the largely Western-centric, platform-specific data stores available for analysis. Meanwhile, it is hard to imagine which scenario would be more cataclysmic: conclusive proof that societies are manipulable under an optimal set of online circumstances, or that online influence operations are at best a crapshoot. For the advertising industry, intelligence and military officials worldwide, and the media, the mere prospect that information operations might work has long been sufficient to warrant the attempt. A decade or more of social media ubiquity seems to have done more to cement that prospect in place than to validate or refute it.


As scholars at the Carnegie Endowment for International Peace recently argued, the information environment is “an adaptive system, growing in complexity with the emergence of new social norms and technologies. And we barely understand how it all works.” Social media use is a major piece of that puzzle but certainly not all of it. More work remains in the cognitive, social, and behavioral sciences — even independent of social media — to map human behavior to the information consumed. To the extent Meta has helped us get there, my hat is off. Like Bigelow before me, I know this benchmark signifies something about how we’re grappling with the information age, but heaven only knows what.

Gavin Wilde is a senior fellow in the Technology and International Affairs program at the Carnegie Endowment for International Peace.
