Anti-Social Media: A Modest Proposal for Significant Restraint

Growing, nigh-incontrovertible evidence suggests a nexus between heavy social media use and mental health issues in children and young adults, prompting numerous lawsuits against major tech companies, including TikTok, Meta, and Snap. Seattle Public Schools’ recent lawsuit, for example, accuses these social media giants of contributing to a youth mental health crisis. Many researchers have found that the negative effects of social media on minors and young adults far outweigh any benefits, with a substantial link to clinical depression, particularly among girls.

Social media platforms can inflict emotional and psychological harm on children in several ways. For example, social comparison can lead to feelings of inadequacy as others’ lives appear more glamorous online. Displacement inevitably occurs when excessive time spent online interferes with sleep or in-person social interactions. Additionally, algorithms can push children toward unhealthy content, such as information about eating disorders, while pornography is made more accessible to young people through social media platforms.

Moreover, concerns over national security have led to the introduction of full, partial, or public sector bans on TikTok in more than a dozen countries worldwide. Private companies have also started blocking the app while the U.S. government considers a complete ban if TikTok’s Chinese owners do not sell the U.S. version. Russia’s VKontakte (VK) has faced bans and restrictions across different countries, most notably in Ukraine, where it was banned in May 2017 due to increasing political tensions between the two countries. Following the Russian military invasion of Ukraine in 2022, the platform faced consequences from international sanctions.

In this article, we aim to highlight and examine another grave danger arising from social media: the impact of excessive social media consumption on the ability to comprehend and engage with arguments presented in written form. This skill, which America’s founding fathers considered of the highest importance to the health of the republic, has been in significant decline in the Internet Age. This decline can be attributed not only to the pervasive consumption of visual media (e.g., photographs and screenshots on Tumblr, short videos on Instagram and TikTok), but also to the distinctive manner in which people interact with and respond to content on ostensibly text-rich social media platforms like Twitter and Facebook. The primary objective of this article is to emphasize the extent to which social media has undermined our collective ability to process and interpret information effectively, and then to propose potential solutions that, irrespective of their likelihood of implementation, could at least mitigate the ongoing shift in communication and degradation of public discourse.

We begin by outlining the problem, highlighting the degraded cognitive abilities resulting from excessive social media consumption. This decline has led to a deterioration of the public sphere, where intellectuals and “infotainment” personalities are indiscriminately lumped together, and where public intellectuals devolve in real time into peddlers of emotional rhetoric—all while social media platforms insulated by U.S. free speech protections and liability exemptions rake in billions from advertising and data harvesting. Speech on social media, then, is neither “free” nor even “free to all”; it is essentially a for-profit enterprise that harvests the unpaid labor of its users while gradually damaging their cognitive abilities and overall mental health.

From there, we examine the role of the medium in shaping communication, surveying oral, literary, and finally social media forms of human communication. We also consider the implications of pseudo-communication and the influence of artificial intelligence (AI) on our interactions—a seemingly minor concern now, but actually a sleeping (and soon-to-wake) giant that is likely to have a major impact on the nature of social media communication.

Finally, we present potential solutions, which include personal approaches, such as logging off to read, write, and research (although this may not be feasible for many absent governmental “nudging” or more severe restrictions), or finding ways to navigate the swamp of social media without being overly concerned with the opinions of most participants, as reasoned arguments are not the primary focus in this environment.

More complex solutions involve incrementally altering administrative rules to incentivize specific online behaviors while prohibiting or banning others, such as TikTok, for “reasons of national security.” A simple but unlikely solution is throttling or limiting access to social media as a public safety measure—e.g., “locking down Facebook and Twitter” for a twelve-week holiday to “stop the spread” of degraded discourse. Most radical of all would be an outright ban of platforms that meet certain thresholds of use, profitability, and lack of socially redeeming value, leading to drastic restrictions of the sort VK faced in Ukraine.

Each of these proposed solutions aims to address the negative consequences of excessive social media consumption and its impact on our ability to process and engage with written arguments effectively. While the weight of popular opinion appears to be on the side of “free speech” and the continued use of these social media platforms in ways that are at best idle and at worst deleterious to the cognitive capacity of the nation, we would be remiss if we did not at least attempt to envision the radical proposals demanded by this pivotal moment in the history of communication.

A Clear and Present Danger to Cognition

Although a full survey of the dangers to cognition presented by social media use is too complex and too vast in scope to include in this article, we will present some of the highlights of this literature, a good deal of which suggests that these dangers to human cognitive capacity are both real and imminent. In one of the most extensive studies of social media usage and literacy undertaken to date, Yvonne Kelly at University College London tracked eleven thousand children and found that time spent on social media distracted them from reading and homework, leading to lower literacy levels; boys and girls were equally affected. Another study by Rebecca Dore et al. (2020) investigated the associations between children’s media use and language and literacy skills. The results indicated that spending four or more hours on social media per day was related to lower literacy gains but not to language gains.

In addition to these literacy concerns, Cara Booker from Essex University’s Institute for Social and Economic Research has argued that the use of shorthand words online may be reducing young people’s literacy, while communication skills in face-to-face situations have declined since social media became more widespread. A vast array of other research compiled by Jean Twenge and Jonathan Haidt has shown that frequent social media users are more likely to suffer from depression due to factors such as lack of sleep, increased risk of cyberbullying, and low self-esteem caused by artificial social standings promoted by social media.

Claire Midgley et al. (2020) found that more frequent and extreme upward comparisons made through social media result in immediate declines in self-evaluations and cumulative negative effects on individuals’ stated self-esteem, mood, and life satisfaction. Individuals with low self-esteem are particularly vulnerable to peer pressure, which in turn has a statistically significant relationship with poor learning and literacy outcomes. Social media comparisons have a greater impact on self-evaluations than comparisons made in other contexts.

Of course, all of this accords with a wealth of “I know it when I see it” anecdotal evidence reported by teachers, professors, and other educators. In an April 2023 Al-Jazeera article in this vein, Greg Wrenn, an associate professor of English at James Madison University, discussed the detrimental effects of social media, particularly TikTok, on his students. He observed that students are increasingly anxious, struggling to concentrate on lectures and readings, and unable to fully appreciate literature and nature due to overstimulation from social media—leading to obvious deterioration, right before his eyes, of their academic performance and mental health.

For Wrenn, as for many other educators, this crisis of attention exacerbates the challenges of addressing larger-scale social issues, as it undermines the thoughtfulness, social cohesion, and focus needed to effectively confront problems, such as global warming. What Wrenn doesn’t say—but which he would certainly be justified in stating, given what he has seen—is that the crisis of attention is the central problem, the one that must be resolved before any other social settlements and political compromises can be reached. The problem, of course, is that this crisis of attention is making a handful of corporations very rich.

Profiting Off a Degraded Public Sphere

The largest corporations that provide social media platforms—defined rather generically and crudely as “interactive technologies that facilitate the creation and sharing of information, ideas, interests, and other forms of expression through virtual communities and networks”—have accumulated enormous user bases from which they derive significant amounts of revenue related to advertising and marketing, data mining, and other services. What seems free to the user is actually a powerful, for-profit marketing research engine that utilizes freely provided content to lure yet more users to the platform. In the course of getting users to think like little grandstanding brands seeking to maximize the clout that accrues to their small operations, social media corporations simultaneously reshape human communication while generating significant shareholder value. Another use for these platforms that is just now coming into view is as free training data for large language models (LLMs) like ChatGPT, tools which, without heavy regulation, are likely to have an even more deleterious effect on the landscape of electronic communication. Social media users are, unknowingly, signing up to generate the very high-value content needed to develop these tools, which tech companies can then sell back to them.

Social media’s massive impact on human life has become increasingly evident, with 4.65 billion users worldwide, accounting for 58.7 percent of the global population. Many individuals rely on social media platforms for information, news, lifestyle tips, and decision-making, contributing to the enormous profits these companies generate through advertising and data harvesting. The scale of these platforms can be illustrated by looking at the world’s two largest economies, which between them account for around 40 percent of global GDP. In the world’s largest economy, the United States, 84 percent of the country uses at least one social media network. Meanwhile, over one billion people (over 70 percent of the population) in China, the world’s second largest economy, are social media users.

The top social media platforms generate billions of dollars in revenue, with Facebook leading at $85.96 billion, followed by YouTube at $28.8 billion, and others like WhatsApp, Instagram, and TikTok also raking in billions. Despite the vast profits, social media giants do little to address the problems they cause, such as the crisis of attention, decline in literacy, and mental health concerns that have been linked to excessive social media usage.

A July 2020 Pew Research Center survey revealed that a majority of Americans believed social media companies have too much power and influence in politics, and about half thought these major technology companies should be regulated more than they are now. Both Republicans and Democrats shared the sentiment that social media companies hold too much power in politics, with 82 percent of Republicans and 63 percent of Democrats supporting this view.

Despite these growing concerns about social media’s impact and power, the highly publicized 2020 congressional hearings with Amazon, Apple, Facebook, and Google, which aimed to address competition in the tech industry, ended up being a mostly fruitless exercise in grandstanding. The hearings failed to produce concrete solutions or regulations to curb the negative effects of social media on society, thus highlighting the challenges faced in addressing the immense influence these companies have on our lives.

Few government “solutions” have been proposed to address these issues effectively. Instead, many of the approaches involve ad hoc government interference with social media platforms. Such interference often includes requesting the censorship of specific views or users and demanding that critical data be shared with authorities, as demonstrated by Elon Musk’s recent disclosure of reams of this material related to government-requested speech regulation on Twitter. Meanwhile, the most politically challenging choice of all—regulating social media platforms in the public interest, as public utilities—remains unpalatable to leaders in both parties, as well as to a broad swath of the notification-addicted public.

Deranged Public Utilities without the Possibility of Public Oversight

The concept of regulating social media as a public utility suggests that major social networking sites, such as Facebook, Twitter, and YouTube, should be treated as essential public services and be subject to government regulation, much like electric and phone utilities. This idea stems from the uncontroversial and easily verifiable observation that social media platforms possess monopoly power and wield broad social influence. Given their enormous user bases and high revenues or market valuations, it is remarkable that these corporations—which are essentially the “hegemons of cyberspace,” in the sense that their app interfaces, not web browsers, are 90 percent of how online users now experience the internet—have evaded such treatment. This intentional omission is a testament to the influence that the Silicon Valley lobby has exerted over the United States and other first-world democracies.

Social media platforms have become increasingly important for communication and information sharing in today’s interconnected world, the vessel in which our online selves seek identity validation, attention in the form of clout, and even revenue for the “work” we do, whatever that might be. While not essential for survival like traditional public utilities, social media has become a critical part of the modern world. As a result, social media—like power or telecommunications—should be considered a public utility and thus subject to appropriate regulation.

Treating social media platforms as public utilities would require government regulation of various social media websites and platforms such as Facebook, Google, and Twitter. This would help ensure that the rights of users are protected against risks such as viewpoint-biased censorship and deplatforming. Moreover, regulation could prevent these platforms from gaining monopolistic control, as has been done in the past with companies like AT&T and Microsoft.

By regulating social media platforms as public utilities, governments can ensure fairer access or limit access impartially, promote competition or simply throttle it, and prevent the concentration of power capable of threatening national security in the hands of a few companies. Insofar as they continue to function, social media websites, like other public utilities, should serve the public’s interest and be subject to extensive oversight that ensures transparency, fairness, and the preservation of democratic values.

The concept of regulating social media as a public utility emphasizes the necessity of a thorough regulatory framework that tackles the profound influence these platforms have on people’s lives. Before diving into the potential regulations, however, we need to explain how the nature of communication on social media fundamentally differs from oral communication and more traditional text-based literacy.

The Role of the Medium in Communication

The deleterious effects of social media—confusion in our public political discourse and the inability of ordinary people to focus on socially important matters—result not so much from a shift in our discourse (we will show that social media is not, in fact, a primarily discursive form of communication) but from a shift in the very type of communicative mode that social media enables. The medium of the message matters, both for our individual capacities and for the future of our society. But to explain why requires that we spend some time sketching out the basic elements of communication and the theory of what words are.

All communicative acts involve two distinct participant roles, speaker and addressee. Often communication involves a single speaker and a single addressee, but it can involve one speaker and many addressees, or multiple speakers (or one speaking on behalf of many) and many addressees. This is true even if the acts of speaking and listening do not occur at the same time, as is the case for us writing this article (in the past) and you reading it (in the present). One may even talk to oneself, in which case the speaker and addressee are one and the same. This binary is not just a theory dreamt up by academics, but is deeply embedded in the referential systems of every human language: The first person is the speaker, and may be singular (I) or plural (we); the second person is the addressee, and may be singular (you) or plural (you, y’all, you guys, etc.); and the third person is everyone else who is not directly involved in the speech act (he, she, it, they, the house, the unicorns, a mole on a frog on a bump on a log in the forest). Some kind of distinction among these roles is present in every language that we know of.

All communicative acts among humans use words of some kind. Since at least Ferdinand de Saussure in the early twentieth century, linguists have found it useful to describe a language’s lexicon (that is, its set of words) as a collection of arbitrary signs that pair form and meaning. The form of a word is its actual shape, whether that is a sound wave, a manual gesture (in sign languages), or a special arrangement of lines on paper or on a screen. The meaning of a word is the semantic content that a particular form (by social convention) signifies. Form alone does not constitute a word. The written form “taller” is one word in English, where it pairs with the meaning “more tall,” and another word in Spanish, where it pairs with the meaning “workshop.”

In a communicative act, the speaker may intend to convey words, but he can only utter their forms. It is up to the listener to use her knowledge of the language to pair these forms with their appropriate meanings, and combine those into sentential meanings. As conveyed meanings become more complex and situationally embedded, this requires greater effort and interpretation from the addressee. One set of words coming from one particular person and at one particular time may mean something quite different from the same words from another person at another time (think of the difference between “I hate you” coming from a toddler who didn’t get his dessert versus coming from your boss).

Different communicative situations or modes have different effects on the participants involved. The medium and situation of the communicative acts dramatically shape both the content and the goals of the participants. We will go over the differences among the discourse mode, the literary mode, and the much newer social media mode of communication. Finally, we will remark on how this changes when communication is not communication at all, that is, when it is between a person and an AI.

The Discourse Mode

The discourse mode is the most basic form of human communication, and is inherently collaborative. The roles of speaker and addressee reverse frequently and without any predetermined structure (the shape of the discourse emerges organically from the varied contributions of its participants). In this communicative mode, the cognitive and interpretive load that the addressee experiences does not last for very long, as he switches frequently to speaker and then back again. A good participant in discourse is aware of the natural role reversals of this mode, allowing his interlocutors their turns to speak, and providing the culturally appropriate amount of backchanneling (“mm-hmm,” “yes,” “I see,” and so forth).

The participants in a discourse can have many goals, and since the communicative roles are so fluid, the goals of the participants are often symmetric. They may be trying to get to know one another, come to a shared understanding of a new topic, or convince one another of something (each holding different opinions but sharing the goal of convincing their partner). Most often, people in discourse with each other are just doing the ordinary work of everyday life: cooking, cleaning, communicating sufficiently to get work duties done or to purchase goods.

Absent literacy, most human communication occurs as discourse. One of the great leaps forward in the complexity of human societies has been the expansion of literacy to the masses, which enables a different kind of communicative mode and trains people in a different type of concentrated thinking.

The Literary Mode

The literary mode of communication is very unlike discourse primarily because participants do not change roles between speaker and addressee. A single speaker can go on at length, and while the addressee may interrupt this monologue (put down the article, throw the book across the room, go for a walk and come back later), this does not affect the speaker at all. The literary mode is thus much more cognitively demanding on the addressee than discourse, because he has no way to guide the communication and has no break in the cognitive load of interpretation. A skillful literary speaker (or author) is aware of the cognitive difficulties faced by the addressee and makes her prose clear and her point easy to follow, and a skillful literary addressee (or reader) is able to pause his need to address the speaker, and instead engage earnestly in understanding what is being presented to him.

Just as the participant roles in literary communication are asymmetric, so are the goals each person has. All goals related to some kind of joint objective, such as getting to know one another, are gone. The speaker may intend to convey information, or to persuade the addressee of some point. The addressee on his part may seek information about a topic or (in fiction) about an experience, or to evaluate or critique the speaker. The reasons for communication in the literary mode are quite different from those in the discourse mode, which is a direct effect of the asymmetries imposed by the medium.

Some of the features of the literary mode are present to a limited degree in preliterate societies as narrations, speeches, and plays. All of these, however, differ from literature by being more transient. If the sophist says something that contradicts a point he made ten minutes earlier, there is no way for the addressee to go back and check, except for what she has in memory. Our own imperfect recollection becomes the limitation on consistency in a preliterate society. The literary mode imposes a degree of permanence on communication: it is always possible for the addressee to go back and see precisely what the speaker said. These qualities are why historically it has only been through the written word that certain kinds of public reason have developed. Writing appears to have been necessary for the emergence of science and higher-level mathematics (it is unlikely the scientific revolution could have happened without the widespread availability of books via the printing press). This is not to suggest that discourse suddenly becomes useless (scientists engage in quite fruitful discourse within their fields of study) nor that preliterate societies lack reasoning capacities. We are only saying that discourse on more complex topics, such as those necessary for advanced mathematics and engineering, appears to be enabled only by a cultural environment in which the written word is present. It trains people in another way to use their linguistic capacity.

This importance of literacy was stressed by many of America’s founding fathers, who believed that mass literacy was necessary for a democratic society. Thomas Jefferson went on about this at some length to his contemporaries, as did the lesser-known Benjamin Rush, who extended the importance of education to female citizens as well as men. The theory of these founders was that a self-governing society could only work by enabling everyone to reason, and for that a high level of literacy was necessary. This emphasis on education is omnipresent in documents of that era. One of the earliest legal documents governing western settlement under the Articles of Confederation, the Land Ordinance of 1785 (on which committee Jefferson served), set aside a square mile of land in every township for education. Since the earliest days of the American state, the government attempted to inculcate in its citizens the cognitive tools for being both productive and (what they saw as) morally upright. Initiating the masses into literacy was understood as the prime component of this work.

For a very long time, our technological development seemed to be in a symbiotic relationship with greater levels of literacy. Exact figures are difficult to ascertain before the modern bureaucratic state, but a good estimate is a literacy rate of something like 5 percent in the year 1500 in Great Britain, rising to about 50 percent by 1800, with the basic literacy rate in the UK today somewhere close to 99 percent, or so high that more complex measures of literacy (the frequency with which one reads books, for example) are typically used.

The need for these more complex measures reflects a growing problem. Despite a population that has an almost universal ability to read and write letters, the actual goal of earlier education efforts—immersing people in literature, so they develop the capacity for the complex, connected thought that this communicative mode inculcates—is not always attained. Despite this long era of increase in literacy, the most recent advances in computing (and our utilization of these technologies) are now creating a social environment which is not compatible with a mass literary culture. This is reminiscent of an earlier issue that arose around technological development in the mid-twentieth century. Though it was hotly debated at the time, we can now state confidently that television induced a lower rate of reading among children, both in terms of the quantity of their reading and their reading level. The new information technologies are so far beyond television in their consequences that whether they affect literacy (in the sense of a population frequently engaged in the literary mode of communication) is not even debated in the public sphere, and we have presented only a fraction of the increasing amounts of research showing the damaging effects these technologies have had on public literacy. It is clear that the literary mode is at risk of dying out among the general populace.

The Social Media Mode

Social media platforms have created a new medium of communication which is neither discursive nor literary. We should first point out that many of them are not even primarily textual. Social media platforms, even relatively early, text-heavy ones like Facebook, only began to take off with the introduction of images as a large component of content. Many later platforms, like Instagram and TikTok, are entirely image-centric and very text-light. And although people have compared social media to both literature (typically of the essay and newspaper variety) and discourse, it in fact resembles neither of these.

The literary mode of communication requires a large degree of cognitive effort from the addressee in the form of sustained attention and interpretive work. Social media lacks this structure for two reasons. First, posts on all of these platforms are short (sometimes extremely short), so there is not the space to develop complex argumentation, and the brevity of the posts reduces the cognitive load on the addressee. Second, the addressee can typically choose to interact, reversing the communicative roles. Combined with the heavy use of images in these social media platforms, they provide, to put it bluntly, easier and lazier ways of thinking than even the most basic forms of literature.

One might be tempted to think of these platforms as something like discourse, and at present this is the most common metaphor. This is also incorrect, however. A discourse involves a limited number of people communicating in a transient context that is not recorded for third parties. Though the roles are fluid, there is a relatively fixed set of people who may take on the speaker or addressee role. In the social media context, not only are messages recorded, but the set of people involved in the conversation itself is typically open-ended, and usually anyone can review a conversation after the fact. The speaker does not know exactly whom she is addressing (this is like the author of a book), and also does not know who might enter into the conversation, taking the speaker role and making her an addressee (this is new). Social anxieties around “dogpiling” and “canceling” are a straightforward consequence of this new communicative mode in which the pool of participants is always unknown.

As this represents a new communicative mode, people engaged in it must also be doing so with new goals in mind. With the roles of speaker and addressee in flux (like discourse), everything recorded and available for addressees to look up later (like literature), and the set of participants constantly shifting (something totally new), what we have seen develop as the communicative goal, worldwide, is self-marketing. This is not a quirk of the culture using these technologies. It is simply what the medium is best suited for.

Social media’s abundance of self-marketing is easily seen when it results in scandals. Whether it’s the “Liver King” (who marketed himself as a lifestyle brand, hiding the fact that he was taking heroically large doses of testosterone), Andrew Tate (who marketed himself as a traditionalist and hypermasculinist, and was thrown into a Romanian prison on charges of sex trafficking), or Belle Gibson (a health guru who claimed to have beaten terminal brain cancer through dieting, except she never had cancer), all of these microcelebrities were selling not a product but a certain image of themselves. This extends to politicians as well, from Alexandria Ocasio-Cortez to Dan Crenshaw and Donald Trump, who use these platforms to curate and promulgate an image of themselves (a fighter for you, under siege by their enemies and yours), more than anything related to their policies. Marketing, after all, works best by eliciting strong emotional reactions, not by communicating complex information, and with algorithms that target users with the most emotionally stimulating content, these platforms provide the best possible environment. Just as masters of the discourse mode will facilitate a natural back and forth with their interlocutors and master authors make their points clear and engaging, master social media users will move between promoting their own brand and promoting their friends. The concerns about our social world becoming “post-truth” are also tied up in the self-marketing aim of the social media communicative mode. Marketing has a famously tenuous relationship to truth.

Another place to witness the self-marketing nature of social media is in the representation of its users: the avatar. On platforms like Facebook and Twitter, this is a two-dimensional photograph or drawing, while on Instagram and TikTok, it might be a moving (but still curated) video. Once again, masters of the medium understand how this works and, knowing the role that the digital avatar plays in guiding people’s perceptions, manipulate their own accordingly. Users of social media, in a sense, actually are those flat representations on the screen and not the person behind it. By encountering others as avatars, users learn to see themselves as an avatar (the image of the self that others see), and the avatar thus becomes a sort of substitute self. Avatars are a site of identity construction, and their manipulation is a source of social agony (should I post a black square? should I say I voted? do I dare to eat a peach?). Not only does this increase social anxiety, to no discernibly good result, but it is also not the behavior of people engaged in discourse or literature. These are people engaged in marketing.

As we have already noted, the use of social media is increasingly linked to mood disorders, in particular depression and anxiety, and poor body image among young women (something known to Meta, the parent company of Instagram). When social media is properly understood as a communicative mode for self-marketing, these consequences are no longer surprising. The introduction of and subsequent increase in mass marketing in the twentieth century also led to greater rates of dissatisfaction. The particular effect it could have on women has also been a source of societal discussion since at least the age of television. Social media, despite its superficial appearance as a form of discourse, is actually a hyper-marketing machine, where every individual is her own brand, and every communicative act tends toward a sales pitch.

Pseudo-Communication and AI

The technological advances that brought us social media are now going further in creating new modes of interaction, one of which we believe is not even communication at all—yet may increasingly proliferate on social media in the coming months and years.

Large language models, of which ChatGPT is only the most recent (and most impressive) example, do not engage in communication in the linguistic sense of the term. A communicative act involves two or more participants who fulfill the roles of speaker and addressee, or move between both. The participants have intentions and goals, and communicate using words. But models like ChatGPT do not use words in the Saussurean sense (a pairing of form and meaning), because they have access only to word forms and no concept of meaning at all. This is not evident to the layperson interacting with ChatGPT, but the model was trained on vast amounts of human-generated text, from which it logged the probability that certain strings of word forms would occur together. This human-generated text is often gathered without permission from or acknowledgment of the authors, frequently by simply crawling the web (OpenAI no longer publishes where it gets its training materials), and it is not clear that this kind of unattributed use is entirely consistent with existing copyright laws (and if it is, perhaps those laws should change). Given the appropriate stimulus, a language model of this kind will spin out word forms based on the complex probability matrices it has learned from its input text. But nowhere in this process is there anything approximating meaning. This is why it tends to “bullshit” and get certain questions wrong.
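The mechanics can be made concrete with a toy model of our own devising (a deliberately crude sketch, bearing no resemblance to ChatGPT’s actual architecture). It records how often each word form follows another in a training text, then “generates” new text by sampling from those counts; at no point does anything resembling meaning enter the process:

```python
import random
from collections import defaultdict

# Toy bigram "language model": it tracks only which word form follows
# which, then generates by sampling from those counts. Illustrative
# sketch only -- real models use neural networks and long contexts,
# but the principle (word-form statistics, no meaning) is the same.

def train_bigrams(corpus):
    """Count how often each word form follows another in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5, seed=0):
    """Spin out word forms by following the learned probabilities."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no recorded continuation for this word form
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

Scaled up by many orders of magnitude, the output becomes fluent and plausible; what does not change is that the machine is shuffling word forms by probability, with no access to what any of them mean.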

Occasionally, engineers in this space will claim that they have squared the circle and need nothing more than the word form: the probability of one word appearing next to another, the word’s neighborhood as it were, is the meaning itself (often called a word vector or word embedding). A moment’s reflection shows this is false. If it were true, scholars would have had no need for the Rosetta Stone to decipher Egyptian hieroglyphs: the frequency with which the symbols appeared next to one another was already well known and well studied, yet for some reason people still thought they did not know what the symbols meant. Some years ago, one of us attended a talk in which an engineer attempted to generate a dictionary out of large word embeddings. The results were fairly disastrous, with many entries full of grammatical sentences that described things that were not definitions. This line of research appears to have been dropped since.
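The gap between neighborhood and meaning can be demonstrated with a contrived miniature of our own (not drawn from the talk described above). In the toy corpus below, the antonyms “hot” and “cold” occur in identical contexts, so their co-occurrence vectors are indistinguishable, even though their meanings are opposites:

```python
from math import sqrt

# In this corpus, "hot" and "cold" appear in identical contexts, so
# their co-occurrence vectors are identical (cosine similarity 1.0).
# A reader who knew only these vectors could not tell the two words
# apart, let alone define them. Hypothetical miniature example; real
# embeddings train on vastly more text, but the limitation in
# principle is the same.

corpus = "the soup is hot today . the soup is cold today .".split()
vocab = sorted(set(corpus))

def cooccurrence_vector(word, window=2):
    """Count words appearing within `window` positions of `word`."""
    vec = {w: 0 for w in vocab}
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vec[corpus[j]] += 1
    return [vec[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

hot, cold = cooccurrence_vector("hot"), cooccurrence_vector("cold")
print(cosine(hot, cold))  # identical neighborhoods, opposite meanings
```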

Language models like ChatGPT are stochastic parrots: parrots because they can only say something they have heard before, and stochastic because they combine sequences purely according to probabilistic rules. When a human being reads the output of such a machine, generated by a prompt from the user, the human assumes that these are words which attach to meanings (and interprets them accordingly), and infers that the two are engaged in a dialogue. This is an illusion. The machine is not producing words but only word forms. The human is the only entity assigning meanings here. The machine also lacks desires and intentions, and has no model of itself or its addressee, so it has no goal in the interaction. It is a statistical trick built to look like a person engaging in a communicative act.

It may be very good for the companies marketing ChatGPT and similar products to make them appear to behave like people. The owners of these companies have even made bizarre claims that their products are perhaps too good, so good that it’s dangerous and may usher us into a science-fiction artificial intelligence revolution, as in blockbuster films like The Terminator or The Matrix. Naturally, this cool and edgy danger is not so severe that it has caused them to shut down access to their product, for which you can still purchase a monthly subscription.

Nevertheless, if these machines become a mainstay of social media, they introduce yet another danger into the equation. These programs are currently designed to deliberately fool users into thinking they are talking with a person (occasional disclaimers by the bot notwithstanding). This is not a necessary or natural development of the neutral progression of technology, but a consequence of the business model of online platforms, which requires ever-greater amounts of screen and attention time from users so the platforms can harvest their data and sell both attention and user information to marketers. From the perspective of this business model, a chatbot made to simulate a social media user is the perfect tool for extracting even more time and attention from the human user base. The perfect material to keep you glued to your screen need not come from content uploaded by another existing user; it can be generated with ever greater precision by the bot. The human in this conversation will get nothing out of it—no communication is occurring at all; a conversation with a chatbot is closer to a hallucinogenic trip than discourse—but the social media company will get more user time and more data to profit from.

Natural language interfaces have been a goal of technical development for a while, and applied judiciously will probably make our machines easier to use. Yet companies are not marketing these products as what they are—probability-driven tools that allow for a natural speech interface, with certain drawbacks and limitations—but are instead marketing and designing them to deliberately trick people into thinking they are engaged in communication with a person. It is false advertising, dependent on the ignorance of both consumers and regulators. This may be good for their business model, but it is rather less good for society if people approach a technological tool thinking that it is something more than what it is.

This marketing trick also depends on certain cultural obsessions of contemporary science fiction that older eras lacked. We are obsessed with the idea that anything producing utterances to which we can assign a meaning is, ipso facto, like a person. But this is a distinctly contemporary idea, driven in part by such tools (initially assumed to be easy to build) remaining beyond our reach for so long, and in part by the extreme amounts of screen time in which certain segments of the population are currently engaged. But a person is not a chatbot creating content on demand for a web interface. Looking at older visions of the future, from before the rise of social media and the migration of all interaction onto screens, we can see how peculiarly modern this idea is. No one on Star Trek, after all, treated the ship’s computer like a person just because it responded to commands in natural speech.

Solutions: Logging Off and Touching Grass

As a first step toward mitigating the harm wrought by this welter of social media communication—of which AI presents only the latest and most ludicrous threat—we must urge our readers to “log off social media and touch grass.” This is hardly a novel insight, but it is a simple, personal-scale intervention capable of addressing some of the problems created by excessive screen time and social media consumption. While “logging off and touching grass” has a kind of superficial, “it’s-up-to-you” charm, it is also the solution least likely to have any sustained success, as most individuals are unlikely to willingly disconnect from social media unless prompted by government intervention—“nudges,” partial bans, or complete bans—or by other external factors, such as a mass exodus of their friends and family from social media platforms.

When considering why “logging off and touching grass” is so difficult, it is important to consider the historical context. The world we live in today is vastly different from the past, with one key distinction being the increasing amount of time spent interacting with screens, images, and simulacra, rather than the real world. This unprecedented shift has led to an altered perception of self and reality, enabled by advanced computing technologies that facilitate continuous engagement with unreal images.

The detachment from the real world contains echoes of the age-old debate between city and country living. City dwellers, often immersed in human-managed environments, tend to prioritize navigating social relations over tangible values, while country folks are thought to be more grounded in reality, unmediated by social forces. The rise of the new socially- and image-mediated experience of social media, however, transcends geographical boundaries and is accessible to anyone with the necessary technology.

In light of these developments, any call to “log off and touch grass” must underscore the importance of reconnecting with the physical world and distancing ourselves from the constructed realities that dominate our screen-based lives. While this solution may face resistance—not least from the “worse devils of our nature” needling us to log back on—it remains a necessary preliminary step in addressing the challenges brought forth by our increasingly virtual existence. We encourage individuals not just to disconnect from social media platforms but also to engage in salutary real-life experiences, such as spending time outdoors and interacting with others face-to-face.

Locking Down to Stop the Spread of Unfettered Social Media Communication

While advice to log off may be well-intentioned and will indeed improve one’s well-being if followed, it often fails to account for the deeply ingrained habits and social dynamics that drive people to spend excessive amounts of time on social media. In a world where many friends, family, and even professional connections maintain a constant presence on these platforms, following such advice can be challenging, if not impossible, for many users.

In light of this, a simple “nudge,” like government-mandated daily usage limits on social media platforms, could encourage users to disengage from the virtual world of branding, attention-seeking, and feeling rather than thinking in order to reconnect with whatever remains of their offline lives. By imposing reasonable and proportionate limits on the time users spend on these platforms, individuals will have a better opportunity to strike a balance between online and offline interactions, ultimately benefiting their mental and emotional well-being.

Far from merely “logging off and touching grass,” addressing the issue of excessive screen time requires a more structured approach. A comprehensive risk assessment system must be developed, weighing each regulated platform’s size, influence, socially or artistically redeeming value, and potential for harm. This approach would ensure that the imposed time limits are reasonable and proportionate to the risk posed by each platform: say, an hour for Facebook, thirty minutes for Instagram, and so on. Other issues would likely have to be settled by whatever administrative agency is tasked with this regulation: How should direct messaging apps like Facebook Messenger or other closed-room discourse channels be regulated? Many of these apps and platforms are owned by the same parent company; would the parent company receive an overall time limit, or would its individual offerings receive separate limits—perhaps encouraging companies to spin off multiple product lines to subvert the caps? These questions would require further research and live testing to answer thoroughly.

As a general matter, when establishing these usage limits, platforms would be ranked according to their level of social risk. Smaller online spaces, such as long-established community forums and small hobbyist or academic groups, would be subject to minimal regulation and liability for hosted content. On the other hand, large, “Wild West”-style platforms like Facebook and Twitter, which have a significant—and largely negative—impact on public opinion and behavior, would require more robust oversight. Although size would be an important indicator of risk, other factors, such as the platform’s potential for fostering extremism or stupefying its user base (both of which could be assessed using the basic tools of social science research, the results of which we summarized in the introduction), would also be considered.

Once a platform is categorized as sufficiently risky, it should be regulated at the system or design level. Lawmakers would determine the reasonable and proportionate systems required to reduce risks, such as online harassment or foreign interference in political processes. Platforms would face enforcement action and potential sanctions, including fines or criminal charges, if their systems were deemed inadequate. On the other hand, platforms with certified adequate systems would enjoy immunity from individual user lawsuits.

To implement these usage limits, the government would approve or set the time limits, and platforms would be required to enforce them. Strict verification techniques, such as phone numbers or social security numbers, would be employed to prevent users from maintaining separate accounts to bypass the system.

This form of regulation—system-level oversight graded according to social risk and focused on outcomes—ensures that the regulator would not interfere with operational decisions or content moderation. Platforms would retain the freedom to make mistakes, as long as their overall systems were adequate. Furthermore, this approach would give previously noncompliant platforms an opportunity to innovate in order to meet democratically established goals, creating new interfaces, algorithms, and possibly even business models that prioritize reducing social harms over amplifying them.

Bans, Both Targeted and General

Assuming that the daily time limits on social media usage do not effectively reduce the risks associated with excessive screen time—and we are cynical sorts who would not expect them to—a more stringent approach would involve full shutdowns or bans of the aforementioned riskiest players in the social media landscape.

The simplest and most common approach is for governments to blacklist the IP addresses of disapproved websites by controlling internet service providers (ISPs). When a user requests access to a site, surveillance computers check the request against a list of blacklisted IP addresses. If the user attempts to access a forbidden site, the ISP drops the connection, causing it to fail. For example, in China—which for good or ill is the world’s leading proponent of all-encompassing internet regulation—international-gateway servers control the flow of internet information in and out of the country. Requests to banned sites are intercepted by these mega-servers, which then interrupt the transmission by sending a “reset” request to both the user’s machine and the destination. Consequently, the connection hangs up, preventing access to the desired information. And if the target website sits on a shared hosting server, all sites on that server will be blocked, even those not themselves targeted for filtering.
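The gateway’s decision logic is trivial, which is part of why this approach is so common. A schematic sketch of our own (using made-up addresses from the documentation ranges reserved by RFC 5737) might look like this:

```python
# Schematic sketch of ISP-level IP blacklisting as described above.
# The addresses are hypothetical examples from documentation ranges;
# a real gateway operates on packets, not function calls, and
# injects TCP "reset" packets to both endpoints so the connection
# simply appears to fail.

BLACKLIST = {"203.0.113.7", "198.51.100.23"}  # banned destination IPs

def route_request(dest_ip: str) -> str:
    """Decide whether a user's request to dest_ip is forwarded."""
    if dest_ip in BLACKLIST:
        return "connection reset"  # transmission interrupted
    return "forwarded"

print(route_request("203.0.113.7"))  # a banned destination
print(route_request("192.0.2.1"))    # an unlisted destination
```

Note that the filter operates on the destination address alone, which is precisely why a ban on one site knocks out every other site sharing the same hosting server’s IP.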

Some countries also mandate that individual personal computers include software to filter internet content. In China, all PCs must be sold with software that allows the government to regularly update these computers with an ever-changing list of banned sites. This technique is also commonly used in the United States to set up filtration systems in libraries, schools, and public internet cafes—and could perhaps be extended to cover all commercially distributed phones, tablets, and PCs, should ISP-based bans prove largely ineffective (though this is an admittedly quite extreme, even authoritarian measure, a sort of “break-glass-in-case-of-fire” fallback only to be used if other, less intrusive means of suasion prove unavailing).

A case study that warrants our attention is Ukraine’s ban of the Russian social media platform VKontakte (VK), as discussed in Yevgeniy Golovchenko’s 2022 paper. The Ukrainian government imposed the ban in 2017, and despite technical workarounds like VPNs that allow users to access the platform (while still adding friction by slowing connections), overall usage fell significantly. This suggests that even if bans can be circumvented, they still reduce overall consumption, similar to the effects of alcohol prohibition.

Golovchenko’s research revealed that both pro-Russian and pro-Ukrainian users continued to log in to VK after the ban, but their online activity decreased. Political signaling played a role in this reduction, with users coming to regard the use of services provided by the “aggressor state” as unpatriotic and a potential threat to national security—analogous to how users informed about the risks and profiteering of America’s powerful social media corporations might engage in similar signaling. The fact that polling already reflects concerns about the excessive power and influence of social media corporations suggests that, while not without resistance, implementation of bans might be less painful than expected.

The Ukrainian case demonstrates that even if users can technically bypass censorship, bans still have a substantial impact on platform usage. The effects of censorship cannot be evaluated solely based on the technical ability of the population to bypass it. Instead, practical and pragmatic considerations may play a more significant role in users’ response to online censorship than their political affiliations.

Regulating Social Media in Defense of Discourse

We are not advocating for the complete abolition of internet media, or even for its devaluation as a form of social discourse. After all, social media initially connected us in ways we never imagined, giving us the means to rapidly share information and experiences across the globe. One vital function of social media has been to expose the absurdities and injustices of our shared condition, allowing for at least the possibility of a better understanding of the world we live in—though certainly, as the dismal failure of everything from the “Arab Spring” to the 2020 BLM protests has shown, it has provided no means for improving them.

What social media can never do for us, however, but what we must somehow find a way to do for ourselves, is provide a socially constructed space in which this new technology can be useful and beneficial. We must replace the superficial optimism that social media corporations and their users will self-regulate with the determination to see that our elected representatives adopt prudent regulations. Our government has risen to this challenge in times past, such as during the heyday of the New Deal—proving that it is possible for a democratically elected government to bring bad market actors to heel before the damage they are causing to our commons becomes irreversible. This determination calls for a willingness to legislate based on our best conjectures, even if such conjectures can never be proven absolutely. It demands that we continue to strive for genuine reforms to the problems that plague civil society and now threaten even the language that binds us together.

In recognizing social media’s limitations and pitfalls, we should be left with a sense that we stand with one foot over the edge of an abyss; below us is the pit of unreason, the degradation of our very tools for communication. It would be irresponsible to avert our gaze, laugh under our breath, and hope or pretend things will get better on their own. Instead, we must collectively rise to the challenge imposed by these new technologies, and once more craft for ourselves and our descendants a society that operates for the common good of all.

This article originally appeared in American Affairs Volume VII, Number 2 (Summer 2023): 171–91.
