
The Terrain of Discourse

It says something about Donald Trump’s presidency that it is difficult to distinguish his final days in office from his final days on social media. He was the tweeting president, the man of all-caps missives who battled mainstream media and communicated directly, often wildly, to the public through social media. Whatever one thinks of Trump and the events of January 2021, we must remember that Twitter, Facebook, and others deplatformed him while he was still a sitting president. This was a watershed event. And it underscores what may be the most important questions in politics today: Who sets the rules of democratic discourse? What shapes the playing field?

The battle of ideas has always been fought through a terrain of media, laws, technologies, and cultural norms. Today’s information environment raises the stakes because it is globalized, participatory, and consolidated on relatively few platforms. Changes in the algorithms used by YouTube, Facebook, or Twitter can shape discourse around the world. New technologies like artificial intelligence or blockchain promise to do the same on a comparable scale.

Regulatory changes can have a similar impact on discourse. Consider the huge potential ramifications of changes to Section 230, the law that provides immunity for platforms hosting third-party content, or of antitrust enforcement. Geopolitical events shape discourse, too. Think of 9/11, the end of the Cold War, or even Covid-19.

The purpose of this essay is to propose “the terrain of discourse” as a formulation to advance our understanding of the information environment. In it, I explore a few dimensions of this terrain through recent books. I also share some implications of “terrain thinking.” What this analysis reveals is that shaping the terrain of discourse has become a leading instrument of power in today’s world.

Defining the Terrain of Discourse

The terrain of discourse, as I define it, is the multidimensional operating environment through which discourse flows. It is the field of play for discourse and the battlespace for influence. It includes everything in the narrative and informational space that shapes human behavior. It is a dynamic and complex system.

The “marketplace of ideas” is core to our understanding of liberalism and free expression. But it is almost always used as an abstract concept instead of something we can describe and understand in concrete terms. What are the policies, laws, and unspoken norms of today’s so-called marketplace of ideas? What are its characteristics? Is it more like the Mall of America or a Middle Eastern bazaar? What is its scope? Just as retail marketplaces can operate very differently, presumably the same is true for marketplaces of ideas.

The terrain of discourse formulation advances the way we think about these issues. It encourages us to understand and communicate the information environment at a deeper and more granular level, just as a topographical map provides a way to understand and communicate a physical terrain.

Current military doctrine defines the information environment as comprising three dimensions: physical, informational, and human. My working model of the terrain of discourse has eight dimensions: technologies, institutions, laws, cultural norms, terms of service, economics, geopolitics, and personal psychology.

Terrain thinking has implications across multiple fields. In the national security realm, it could expand the scope of strategic communications, information operations, and diplomacy. Consider the challenge of hack-and-leak operations as an example. The conventional approach would be to identify and prosecute perpetrators of the hack. Terrain thinkers might ask: what would it look like to shift the terrain of discourse to disincentivize these operations and mitigate their effects? And importantly, what might be the secondary effects of these moves? As warfare evolves from the competition of violence to the competition of narrative, managing the terrain of discourse will take on greater, even existential, significance.

In domestic policymaking, terrain thinking provides a new lens for analyzing the impact of laws and regulations beyond economics. Consider a potential breakup of Google or Facebook. The conventional approach would look at this as an antitrust matter situated in an economic marketplace. Terrain thinkers might look at it more broadly, asking: What would be the impact of breaking up Facebook or Google on free expression, democratic health, or geopolitics? I would argue that private companies having a monopoly-like dominance over the hosting of public discourse is a net negative for democratic health, privacy, and free expression. Terrain thinking pushes us to ask how we might measure this.

In retail politics, we have already seen a tactical evolution towards “working the refs” and “shaping the playing field,” particularly on the left. For those opposing Trump, for example, the conventional communications approach was to counter-message Trump’s every tweet and fight a hashtag-driven messaging battle with his supporters. The terrain thinking approach, on the other hand, sought to deplatform Trump and change social media algorithms and policies to disadvantage him and his supporters. And it worked—but at what cost?

Terrain thinking raises questions in political theory and sociology as well. What does sovereignty mean in a globalized information environment, where decisions at a Palo Alto company impact elections in other countries? What does freedom of expression mean in today’s information environment? Who gets to define disinformation and on what basis?

Laws, Institutions, and Internet Governance

In Speech Police: The Global Struggle to Govern the Internet, David Kaye looks at internet governance as a dimension of today’s terrain of discourse. Kaye, a law professor and former United Nations special rapporteur on free expression, advocates for more democratic governance of internet speech. For him, this means conforming state and company policies to international human rights standards.

Kaye’s book is delightfully slim. His persistent question is, “Who’s in charge?” More specifically, “Who is in charge of policing speech, protecting individual expression, and adjudicating speech online?” Kaye believes this is an urgent question for democratic societies given how significantly internet speech influences them. He laments that tech companies play an outsized governance role by default, although some policies like Europe’s “right to be forgotten” do force reconciliation.

Kaye offers a behind-the-scenes look at how tech platforms currently govern these issues. He describes attending a Facebook “Content Policy Forum.” These are mini-legislative sessions within Facebook in which the company’s content teams address speech policy questions and decide how to improve their policies. Kaye paints a picture of a professional governance operation staffed by earnest employees. But a larger question looms: why should Facebook decide who is permitted to participate in what many would regard as a huge corner of the digital public square?

Kaye notes the growing use of artificial intelligence (AI) to help with content moderation. Tech CEOs like Mark Zuckerberg view AI as essential given the volume of information that flows through platforms. Kaye, however, is concerned that AI keeps content regulation hidden from public accountability.

He describes the growth of Internet Referral Units (IRUs) in various countries to help police extremist and terrorist content. For example, the British government’s Counter-Terrorism Internet Referral Unit (CTIRU) has monitored terrorist content on social media since 2010. The CTIRU does not have the power to order content removals, but it can request them. As of April 2018, the CTIRU claimed to have successfully requested the removal of 304,000 pieces of terrorist-related material.

For Kaye, the combination of IRUs and company content moderators is not enough. In what some might view as a plea for government censorship, he writes:

If governments believe content to be problematic or dangerous or harmful, removal of that content should be subject to democratic principles: adoption of new laws by regular legislative process, subject to judicial or other independent challenge, demonstration by government of the necessity and proportionality of action. But with IRUs and outsourced content moderators, that is simply not happening.

The upshot for Kaye is that the internet is no longer a democratic space and the only way to claw it back is through a radical rethink of content governance. For him this means conforming tech policies and government regulations to international human rights standards.

“Government regulation should monitor company behavior, protect the space for individual expression, reinforce the need for transparency by the companies and themselves, and invest in the infrastructure necessary for freedom of expression in their countries,” he writes. Kaye goes so far as to advocate public service media online.

The problem with pushing too much power to governments, however, is that they do not always comply with the international standards Kaye supports. With 195 countries in the world, one can only imagine how some of them would use policing powers to crush dissent and violate the human rights Kaye claims to cherish. This is already happening in places like Malaysia, where the government recently used emergency powers to impose “fake news” laws to penalize “wholly or partly false views” related to Covid. Kaye is aware of these issues, but he does not address them enough in Speech Police.

Still, Kaye’s book is an important contribution to the debate about regulating online content. The changes Kaye prescribes would dramatically alter the terrain of discourse by applying international human rights laws to all stages of content regulation. This would shift power to international law and to government authorities, for better and for worse.

Personal Identity and Psychology

This Is Not Propaganda: Adventures in the War Against Reality is Peter Pomerantsev’s guide to a world consumed by information warfare. As some may recall, Pomerantsev penned the 2014 book Nothing Is True and Everything Is Possible, a memoir chronicling his experiences working in TV in Putin’s Russia. He portrays contemporary Russia as a sort of kleptocracy whose state media manipulates the truth as if it were reality TV.

This Is Not Propaganda looks at the expansion of online propaganda and information warfare over the last decade. Chapter by chapter, Pomerantsev takes us on a world tour of nefarious information operations, many of which will be familiar to those studying these issues. In chapter one, “Cities of Trolls,” he takes us from Manila to St. Petersburg, detailing the rise of troll farms and state-sponsored trolling. Chapter two, “Democracy at Sea,” moves from the Balkans to Mexico to Estonia to cover new forms of information activism and their relationship with democracy and authoritarianism. Chapter three describes Russia’s disinformation and hybrid warfare efforts against Ukraine and Estonia. Subsequent chapters address China, populism, and the weaponization of identity. From Donbas to Aleppo, Pomerantsev covers the hot spots of disinformation and online movements.

Pomerantsev blends reportage and analysis with his personal family history. At the beginning of each chapter, he shares letters and stories from family members who fled the USSR as dissidents when he was young. He and his family eventually landed in the UK, where his father Igor became a writer and broadcaster for the BBC’s Russia service.

The book is valuable for understanding the terrain of discourse because of its helpful descriptions of how contests for influence have changed, particularly with respect to the role of identity. For Pomerantsev, modern politics has become a struggle over the construction of identity. Because social roles are more fluid and dynamic than in the past, our identities are too. He writes:

We are living in a time of pop-up populism, when the meaning of “the people” is in flux, we are constantly redefining who counts as an insider or outsider, and what it means to belong is never certain, as political identities burst and then are remade as something else. And in this game, the one who wins will be the one who can be most supple, rearranging the iron filings of disparate interests around new magnets of meaning.

Political identities, in sum, are part of the terrain of discourse.

Through stories about his family, Pomerantsev invites readers to connect the online propaganda themes he covers with his own identity. This gives his book the feel of a memoir. He clearly takes these issues personally. At times this is frustrating because it is difficult to distinguish Pomerantsev’s objective analysis from his personal perspectives and his own magnets of meaning.

For example, Pomerantsev is quick to blame political trends he does not like on propaganda. This is most noticeable when he writes about Trump, Brexit, and populism. Consider this passage on Trump:

That enough Americans could elect someone like Donald Trump with so little regard for making sense, whose many contradictory messages never add up to any stable meaning, was partly possible because enough voters felt they weren’t invested in any larger evidence-based future. Indeed, in his very incoherence lies the pleasure. All the madness you feel, you can now let it out and it’s okay. The joy of Trump is to validate the pleasure of spouting shit, the joy of pure emotion, often anger, without any sense.

Rather than grounding his assessment in academic work like cultural backlash theory, Pomerantsev shares personal opinions that conflate different issues.

There is a coming-of-age element to the book. In his chapter on identity, Pomerantsev writes about attending one of the “European Schools” set up to promote the EU when his family moved to Munich from the UK. He says the school instilled in him a certain sense of European identity. Perhaps this is the source of his emotions about Brexit and populism?

Ultimately, Pomerantsev’s book betrays a personal quest for belonging, for the certainty of hard facts, for restoring his conceptions of “freedom,” “democracy,” and “Europe.” It is hard not to relate to the author’s sense of ennui and identity confusion after studying mind-bending topics like disinformation. Pomerantsev ends the book with a story about his twin sons playfully identifying as Batman and Superman, as if these are the sort of pop-up identities he writes about.

This Is Not Propaganda both wittingly and unwittingly shows how our identities, epistemic biases, and personal histories are part of the terrain of discourse. The line between objective truth and personal truths can be as blurry as the distinction between normal politics and nefarious influence. This is not an excuse to deny objective facts or to dismiss legitimate concerns around influence and disinformation. It is simply to appreciate that there are cultural, historical, and psychological dimensions of this terrain, even among those who share democratic and liberal values.

Government Bureaucracies and Institutions

The capabilities, doctrines, and structures of military and diplomatic institutions play a significant role in the strategic communications landscape. A good illustration of this can be found in Richard Stengel’s book, Information Wars: How We Lost the Global Battle against Disinformation and What We Can Do about It. The book is an inside look at the U.S. government’s attempts to grapple with disinformation and information warfare waged by ISIS and Russia. Stengel, the former editor of Time, was at the front lines of these efforts in his role as undersecretary of state for public diplomacy and public affairs under President Obama.

Stengel assumed office in early 2014, just as concerns were rising over the Islamic State’s online propaganda activity. His journey is humbling and frustrating, in large part because of the State Department’s culture and bureaucracy. He describes an institution that is siloed, risk-averse, and centered around meetings. The computer systems were obsolete. There was resistance to embracing digital and social media. Even the turnstile to enter the State Department building was stuck in an earlier era. “I thought that I had experienced bureaucracy at Time Inc. when I ran Time magazine, but that operation was astonishingly lean compared with the State Department,” he writes.

Stengel comes across as capable within the context of the bureaucratic operating environment. Yet even the selection of Stengel, who came from “old media” and learned Twitter on the job, says something about the State Department’s readiness to address online influence. Stengel describes his first-ever tweet, which condemned Putin’s actions in Ukraine in 2014. Online trolls (presumably Russian) were quick to dismiss and denigrate his messages. Foreign trolls notwithstanding, the State Department’s online messaging was objectively weak at the time.

Stengel eventually made progress in setting up counter-messaging centers. He inherited one such center called the Center for Strategic Counterterrorism Communications (CSCC), whose digital team attempted to counter online Islamist extremism with the widely mocked “think again, turn away” motto. Over the course of his tenure, he oversaw the creation of a counter-Russia messaging center based in Ukraine and a joint counter-ISIS messaging center in Abu Dhabi, which was named the Sawab Center (Sawab means “the right way”).

Stengel’s team learned, adapted, and made strides despite the limitations of State Department bureaucracy. Stengel describes some of the lessons he learned in countering disinformation. For example, the CSCC created a mock ISIS recruitment video called “Welcome to ISIS Land,” which was met with public derision. Stengel’s takeaway was that snark and irony aren’t appropriate for government. From that point on, he took a straightforward, just-the-facts approach to countering disinformation. This contrasted with Russian propagandists who operated under a different set of norms, including personalized attacks on him and other State Department officials.

Another lesson was to delegate and localize counter-messaging efforts. This is how the joint U.S.-UAE counter-ISIS messaging center emerged. The Emirati team moved faster and was more in tune with the language and cultural nuances of the ideological battle. More generally, Stengel came to realize that credible third parties, journalists, and NGOs could produce more effective material than the State Department, although outsourcing propaganda raises other considerations.

Eventually the CSCC evolved into the Global Engagement Center (GEC). The idea was to create a hub for U.S. government counter-messaging efforts against the Islamic State, Russia, and others. Stengel describes the behind-the-scenes planning, culminating in a bipartisan bill signed by President Obama in December 2016. Congress made the GEC operational the following year with $80 million in funding. Three-quarters of the GEC’s budget was designated to counter Russian influence operations. Then secretary of state Tillerson, however, who had no strategy of his own and was trapped in a lost bureaucratic battle between State Department careerists and President Trump, did not request the money for more than a year. That meant a professional cadre was never developed to build on Stengel’s work and drive innovation.

A new bureaucratic entity, a new bill, a new president, and a new GEC—all of these institutional changes shaped U.S. capabilities and activities, with obvious implications for the terrain of discourse. Stengel concludes the book with some reflections on the 2016 election, and in the final chapter, he offers ideas on what to do about disinformation. Among his best proposals is the suggestion that media companies include original source material with their articles.

Stengel notes in his postscript that “information war is not a battle of technologies or platforms; it’s a battle of ideas.” This is true, but Stengel’s experiences suggest an addendum: information war is also a battle of institutions.

Data and Political Economy

Today’s data-driven economy is at the heart of global discourse, a point underscored by Shoshana Zuboff’s epic book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Surveillance Capitalism is a monster of a book, and Zuboff’s writing is reminiscent of Marx, Weber, and Durkheim: a sweeping, critical look at structural changes in the economy.

Zuboff, a Harvard Business School professor, defines surveillance capitalism as “a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales.” The nagging question she uses to frame the book is, “Will this emerging information civilization be a place we can call home?” For Zuboff, the digital dream has darkened into something more sinister, parasitic, and exploitative.

A new kind of power, instrumentarian power, is at the heart of surveillance capital. This is the power to understand and shape human behaviors towards others’ ends—to manipulate people using their personal data. Surveillance capitalism is not just about surveillance or data. It is a form of rogue capitalism. Zuboff writes:

Just as industrial capital was driven to the continuous intensification of the means of production, so surveillance capitalists and their market players are now locked into the continuous intensification of the means of behavioral modification and the gathering might of instrumentarian power. . . .

As long as surveillance capitalism and its behavioral futures markets are allowed to thrive, ownership of the new means of behavioral modification eclipses ownership of the means of production as the fountainhead of capitalist wealth and power in the twenty-first century.

According to Zuboff, Google pioneered surveillance capitalism the same way Ford and General Motors pioneered managerial capitalism. Pay-per-click advertising and web cookies eventually gave rise to more sophisticated data extraction based on user data, which she calls “behavioral surplus.” The data-driven advertising model spread to other companies, including Facebook, which turned its social graph of likes and personal information into an advertising behemoth. Zuboff notes that Sheryl Sandberg led the expansion of AdWords for Google before joining Facebook and building its advertising engine. Today, surveillance capitalism is the default model of information capitalism on the web, generating billions in annual revenue. Even telecom utilities like Verizon are mining behavioral data for revenue.

Surveillance capitalism is the political economy of data-driven influence. It represents a restructuring of capitalism itself. Zuboff describes it as a “market-driven coup from above” that demands the “privilege of unfettered freedom and knowledge” without law or regulation. For Zuboff, this is a profoundly antidemocratic social force.

It also represents a profound change in the terrain of discourse. In a sense, surveillance capitalism allows for a more entrenched, efficient, and scalable mode of propaganda. Both surveillance capitalism and strategic communications seek to shape behavior through information. One could argue that surveillance capitalism is disintermediating strategic communications. The data exhaust of the information economy, combined with machine learning, has created an economic model that weaponizes data for influence and drives human behavior at massive scale.

Personal data at the deepest and most intimate level is knowable and actionable. This is scary, and it makes policy discussions around data ownership and privacy all the more urgent.

“Discourse Like Water”

A Washington, D.C.–based technology company used to have the slogan, “Information like water.” The company’s founder, Michael Saylor, envisioned that information might become as cheap and ubiquitous as water. It was an evocative image in the late 1990s. I mention this because discourse, and in turn influence, is like water, too. It flows according to terrain. Changes in this terrain—whether technology, market dynamics, policy, or institutions—shape democratic discourse.

This essay has sought to conceptualize the terrain of discourse and explore its current features through the lens of a few recent books. In addition, there are several larger implications of terrain thinking for communications practitioners and other stakeholders, whether in government, military, politics, academia, or the private sector.

First, assessments of the terrain of discourse should go beyond target audience analysis, focus groups, and social media analytics to a broader set of dimensions including policies, innovations, norms, and institutions. Situational awareness needs to expand, and it may be worth exploring new frameworks of analysis—a cartography of the terrain, so to speak.

Second, it is critical to internalize that the terrain of discourse is constantly evolving. Thus we must learn to anticipate changes in terrain and understand their potential effects. Scenario planning may be helpful for thinking through hypotheticals. For instance, what would be the ramifications of an aggressive new Chinese propaganda push, a hack-and-leak of the genetic data of all U.S. Representatives, or mass adoption of facial recognition technology? How might the death of Rupert Murdoch impact the terrain of discourse? Any number of developments could have massive butterfly effects.

Third, we must consider when it makes sense to proactively shape the terrain of discourse, and the ethics of doing so. Changing laws to establish personal ownership of biometric data, for example, could provide a bulwark against surveillance technologies and authoritarian threats. Likewise, those concerned about rising censorship of dissenting viewpoints might ponder how to nudge the terrain to ensure free expression. The important point is that the terrain of discourse can be nudged and shaped, a capacity that can be used for good but can also be dangerous. The ethics of this needs to be discussed more openly.

The conventional approach to policymaking and strategic communications is to accept the information environment as it is, with a linear, rudimentary understanding of it. It is to battle one set of messages with another set of messages. Terrain thinking, which has been the focus of this essay, pushes us to understand the information environment as a complex, dynamic, multidimensional system that significantly impacts our societies and values like free expression. It approaches communications contests at the terrain level as well as the messaging level, as we saw with the deplatforming of Donald Trump (however one may feel about that).

Terrain thinking recognizes that the marketplace of ideas does not exist in a vacuum; that the free flow of ideas is not frictionless; that these abstract concepts, in reality, have shape and form that evolve across time and space. It recognizes that the ability to shape the terrain of discourse may be the ultimate expression of power in today’s world and should not be taken for granted.

This article originally appeared in American Affairs Volume V, Number 2 (Summer 2021): 192–203.
