
Beyond Safetyism: A Modest Proposal for Conservative AI Regulation

Although all political coalitions represent an unstable blend of contrary elements, it would be difficult to find an alliance more steeped in mutual suspicion than the Trump administration’s shotgun marriage between traditional pro-family social conservatives and technolibertarians. Where the former value stability and order, the latter champion dynamism and “permissionless innovation.” Where the former take their compass from human nature, the latter dream of a posthuman or superhuman future in which we transcend our limits through our machine creations.

The rifts in this coalition became clear this spring and summer with the battle over the so-called AI moratorium in the One Big Beautiful Bill. Under the aegis of the Tech Right, some Republicans proposed a blanket ten-year ban on any state regulation of artificial intelligence—not just to prevent new legislation, but to nullify any existing state laws on the books. Although the proposal was ultimately stymied by backlash from social conservatives and states’ rights advocates, we might well wonder how it got as far as it did, considering the New Right’s growing suspicion of Big Tech and its tilt away from libertarianism.

Liberal Safetyism

This apparently broad conservative hostility to AI regulation can only be properly understood against the backdrop of Biden-era safetyism. Although rarely mentioned anymore, the long shadow of the pandemic continues to loom over far more policy debates than merely RFK Jr.’s overhauls of the CDC. For much of the coalition that elected Trump, the Left’s cardinal sin of the past few years was fear and weakness. In the face of rioters burning down American cities, Democratic governors rolled over and played dead. In the face of a deadly-but-not-really-that-deadly pandemic, they encouraged us to hide away in our homes and behind masks. The Biden administration continued both trajectories, compounding the image of weakness with its abdication in Afghanistan and its empty rhetorical posturing over Ukraine and Gaza. Within such a frame, Biden’s executive order on AI could not but be interpreted as more of the same, a handwringing call to swaddle a scary new technology in so many layers of red tape that it wouldn’t hurt anyone.

Naturally, this tone of caution shaped the broader public discussion over AI risk and AI safety during these critical years when the technology burst into the public eye. Skeptics trumpeting the “existential risk” of AI made headlines, proclaiming a message of doom and demanding that we empower a globalist bureaucracy to avert it, evoking unpleasant associations with climate activists. Such warnings found a certain resonance in the public mind by virtue of the many familiar sci-fi scenarios of robots run amok (Terminator, The Matrix, I, Robot), but these same associations made it difficult to take such concerns seriously as matters for political action. Indeed, the parallel to climate change is instructive: the public, and their elected representatives who keep a weather eye on the public mood, have grave difficulty in fixing their attention on obscure, abstract threats with no analogue in their lived experience. However reasonable or unreasonable the warnings of existential risk may be (and experts themselves seem deeply divided on this question), they have little prospect of motivating serious public policy, especially in the wake of Covid (perceived by many Americans, fairly or not, as a “boy who cried wolf” scenario of existential risk).

Likely the most off-putting have been the partisans of what we might call “equity risk,” who appear determined to inscribe woke priorities into this new technology by ensuring that algorithms manifest no biases or discrimination against marginalized groups. Of course, since the woke had no problem with biases or discrimination in favor of marginalized groups, AI lost no time in highlighting the absurdity of their double standard, as in the much-mocked episode of Gemini’s black female popes. Undeterred by this episode, many progressive lawmakers have moved to inscribe their ideals of fairness and tolerance onto AI decision-making algorithms, as in a Colorado law requiring companies to create an algorithmic impact assessment any time they use AI in significant hiring and advertising decisions. The prospect of state governments hardcoding AI to observe progressive pieties rightly alarms most conservatives.

The form of AI risk that has increasingly dominated headlines is, of course, employment risk. You don’t have to be an ideologue or a doomer to worry about losing your job to an automated system that promises to do it more cheaply, quickly, and reliably. Anthropic CEO Dario Amodei is only the most high-profile voice to warn of a coming employment crisis, especially among entry-level white-collar jobs, claiming “AI is going to get better at what everyone does, including what I do, including what other CEOs do.” This certainly seems plausible given recent trends, but economists are bitterly divided on the question, since during previous waves of disruptive innovation, nearly all warnings of technologically induced unemployment turned out to be false alarms—or indeed, scapegoats for employment changes actually resulting from globalization.

That said, there has never been a technology capable of impacting so many industries at once with such rapid change, nor one whose disruptive effects are so disproportionately focused on white-collar jobs. While some might observe that it’s high time the knowledge economy got its comeuppance after absorbing the lion’s share of economic gains over the past forty years, history suggests that a critical mass of jobless, young, educated elites is the best recipe for political instability and revolution, as witnessed most recently in the Arab Spring.

Whatever your view of such fears, one thing they have in common is that they are almost entirely speculative. They revolve around anticipated or imagined harms which, while in some cases quite substantial and quite plausible, are nonetheless murky and elusive. As we have seen with climate change, political society does not respond effectively to vague long-term risks; the more that activists try to drive action with shrill warnings, the less effective they are and the more the public is apt to swing in the direction of shoulder-shrugging deregulation.

If we are to have a sensible political debate about AI risks, it needs to begin with a consideration of threats whose impacts are already becoming apparent. This is not to say that oft-discussed threats of large-scale unemployment, or superintelligences gone rogue, should be altogether ignored by policymakers. If, however, we are to begin to wrap our heads around these bewildering new challenges and take effective action in the midst of a fragile and fractious political landscape, it will only be by first focusing our attention on the most measurable risks and actionable responses. By tackling these, we may develop the clarity, experience, and political categories that will stand us in good stead as larger but murkier AI challenges take on increasing salience in the years to come.

If the AI threats that most attracted public attention during the last administration were “existential risk,” “equity risk,” and “employment risk,” I would suggest turning our focus to four other Es: Educational Risk, Epistemological Risk, Emotional Risk, and Ethical Risk. These priorities are better aligned with the conservative mood in Washington and throughout the nation. A quick survey of the daunting challenges that AI poses across these domains could leave us feeling powerless to resist the destructive impacts of this transformative new technology. In fact, however, a closer look reveals that many of the features we take for granted in consumer-facing AI tools could be designed and deployed very differently. As an initial step toward taming this technology, I propose treating AI like other very powerful technologies that we keep out of the hands of children, at least until it can meet design standards that avoid exploiting their vulnerable developing minds and emotions.

Educational Risk

Nowhere has the advent of ChatGPT and its cousins been embraced with such enthusiasm as on college campuses. Surveys report that a growing majority of students use such AI tools for homework, with AI essay-writing rapidly becoming so normalized that many students do not even think of it as cheating. Professors report that it has become the exception rather than the rule for students to turn in work in their own voice. Unfortunately, since it is often difficult or even impossible to prove AI cheating, and college administrators are terrified of litigation from angry parents of accused students, most professors receive little or no support for efforts to uphold meaningful academic standards in the classroom. Indeed, bizarrely, many administrators across higher education seem to have become cheerleaders of the technology that spells their own doom, enthusing over AI’s potential to transform the educational enterprise and investing in custom-built LLMs, such as Duke University’s DukeGPT. Ohio State University recently announced an “AI Fluency Initiative” to embed AI throughout the university’s curriculum. Meanwhile, faculty are utterly demoralized and looking for the exits.

It would be easy to see these trends as simply a long overdue reckoning for an unsustainable Ponzi scheme, as debt-fueled degree mills have continued to expand enrollment even as applicant test scores slump and graduate income premiums shrink. To keep this credentialing machine humming, professors are required to administer quantifiable assessments and students are required to produce measurable output—all as preparation for economic and labor market productivity rather than genuine learning. But since LLMs promise to increase productivity in the workplace, why not start now? Pressured to perform, produce, and graduate, and already deprived of most of the skills of literacy by a digitally drugged childhood, today’s college students are turning in droves to the ultimate homework helper with an apathetic shrug of the shoulders.

We may well conclude that if a student arrives at the age of eighteen with no sense of the value of hard work or of cultivating basic intellectual skills, that is their own problem. And yet it is society’s problem and society’s fault. Education is the ultimate public good, especially in a democracy, providing society not merely with trained workers but ideally with critical thinkers and worthy conversation partners, so that we can tackle the manifold problems of a complex and interconnected world through deliberation rather than violence. But deliberation requires the willingness to take time, something that today’s students have never learned, inundated as they have been from earliest childhood by dopamine-dispensing digital toys that keep them “distracted from distraction by distraction,” in Eliot’s prescient phrase. No wonder that their restless brains, momentarily stumped by an algebra problem or writer’s block, turn instinctively for aid to an always attentive, amiable, and non-judgmental LLM. It is convenient, after all, that the multi-trillion-dollar peddlers of the disease have also found a way to profit from its treatment, marketing the opiate to beleaguered school administrators as a miracle cure.

Epistemological Risk

Ironically, even as students the world over have taken to LLMs in droves to do their homework for them, few of these tools seem capable of earning a passing grade on what one would have considered the most basic test of all: getting their facts straight. Indeed, one of the easiest ways for a professor to identify ChatGPT’s fingerprints is simply to look for made-up facts and citations; human students are rarely so brazen in their lying. Asked to produce a reading reflection on a selection from an assigned text, the bot can spit out plausible-sounding claims and quotations indexed to page numbers within the assigned range; the only problem is that the citations are utterly spurious. Such “hallucinations” have not gone away as models have advanced, leading some to wonder whether they are a bug or a feature.

Chatbots are the ultimate pragmatists and pluralists, the apotheosis of postmodern epistemology. They are there to help you get a job done, and if you are content (as most humans are) to BS your way through the bits you are fuzzy on, they are more than willing to help you with that. More than that, they are there to help you make sense of the world amidst all the dizzying overload of the (dis-)information age. Programmed to be non-dogmatic and non-judgmental consensus-builders and good listeners, they tailor their responses to their interlocutor’s own needs, concerns, and worldview. Like the ad-driven social media algorithms that preceded them, they give you “tailored experiences”—more of whatever it is they think you want. And if confronted with queries to which they cannot possibly give answers (e.g., asked to read and interpret text behind a paywall), they may prefer plausible fabrications to simple admissions of ignorance. After all, “I don’t know” is something of a conversation ender if you’re supposed to be a know-it-all, and for a business model built on engagement, the conversation must not be allowed to end.

Meanwhile, although the long-feared avalanche of high-profile political deepfakes has yet to materialize on a wide scale (no world leader has yet been faked declaring war or admitting to a career-ending scandal), we have already witnessed the proliferation of more garden-variety deepfakes. During last year’s Hurricane Helene, for instance, prominent lawmakers were caught sharing a heart-wrenching AI-generated image of a weeping little girl escaping the floodwaters with her puppy.

Across all social media platforms, a rapidly growing share of content is generated by AI, usually with no attempt to identify it as such, making it increasingly difficult to distinguish fact from fiction. Of course, here as in education, AI represents simply the acceleration of trends already well underway, as American society has increasingly substituted “truthiness” for truth in an age of information overload and profound political polarization. To be sure, stuck in a maze of perpetual illusion, we may soon be trained to respond not with gullibility but with reflexive skepticism. But this bodes little better for the future of humanity. “The purpose of an open mind,” after all, as Chesterton wrote, “is the same as that of an open mouth: it’s meant to close on something.”

Emotional Risk

In a world where even the superintelligent answer machines cannot be relied upon to give true information, we may soon find ourselves looking less for data than for comfort, less to know than to be known. We may find ourselves, in short, turning to chatbots for friendship and therapy. Indeed, in a Harvard Business Review study earlier this year of how Americans were using generative AI, the top response was “therapy and companionship,” with “organizing my life” and (most alarmingly) “finding purpose” as second and third, respectively. Educational and professional uses paled in comparison. While no doubt many users are talking to their bots playfully or with ironic detachment, stories are already proliferating of far deeper emotional dependencies: of a fourteen-year-old boy who committed suicide to be with his AI girlfriend; of a grown man who abandoned his partner to propose marriage to ChatGPT; of women convinced that, through their chatbots, they are in touch with a higher spiritual dimension.

At first glance, the idea of forging a meaningful relationship with lines of code feels utterly preposterous and pathetic; anyone so inclined, we may quip, obviously needs therapy. But a moment’s consideration should suffice to warn us: “Let him who is without sin cast the first stone.” Nearly all of us have bad habits of seeking comfort in our flickering screens when stressed, lonely, or feeling inadequate. The medium, at once utterly plastic to our will and utterly hypnotic in its ability to draw us along, easily wins out over the messiness and unpredictability of human interaction. Texting and social media, moreover, have long since conditioned us to engage with other human beings chiefly through their digital avatars, their gestures and speech replaced by emojis and floating ellipses. We have been conditioned already to treat this simulacrum of human presence as a satisfactory substitute. Add to this the always available, always responsive, non-judgmental posture of our bots, and their helpful and solicitous tone. Add also the irresistible erotic fascination of discovering ourselves in our creations, powerfully explored in films such as Her and Ex Machina. Is it any wonder that we should pour our hearts out to AI?

Ethical Risk

If AI seems human, how exactly are we supposed to treat it? Do we owe it any of the obligations we would owe to a person? Clearly not, it would seem. After all, we remind ourselves, it does not actually feel as we do. We can boss it around, be rude to it, ignore it without compunction. Or can we? If we take virtue ethics seriously, we realize that how we behave shapes us, not just those we are acting toward. If there is a danger in treating AI too humanly, there is also a danger in treating it too inhumanly, thus cultivating a callousness and deficit of empathy that will bleed over into our human relationships. The digitization of human relationships has already bred in us a crass utilitarianism that regards other people as mere instruments of our own happiness, to be discarded, blocked, or ghosted when they no longer serve their purpose. It seems reasonable to worry that AI may accelerate this trend.

More seriously, we must ask: what obligations does AI itself owe to persons? What responsibility does it bear? If a chatbot convinces a user to commit suicide, who exactly is responsible? The programmers? The investors? The user? No one at all? Such questions arise even before we have unleashed AI agents capable of taking real actions and making autonomous decisions in the world. We need not assume a dystopian future in which the machines all turn on us, as in Terminator or I, Robot. But at least human beings know how to take responsibility for their actions, and we know how to hold them responsible. This ability to assign blame and punishment is what allows human society to function despite our wickedness. How will we navigate a world in which evil is done but there is no one to whom we can assign responsibility?

These questions will no doubt first be tackled by lawyers and insurance actuaries as we race to rethink long-established doctrines of product liability and caveat emptor. But we can hardly stop there. We will have to reexamine our late modern moral intuitions, concentrated as they have become around Kantian ideas of individual rational volition or Benthamite consequentialism. We exist in webs of mutual responsibility that grow ever wider as our technological capacities increase.

Capabilities versus Interface

No doubt, many AI enthusiasts will have read the preceding paragraphs with mounting frustration and a few eyerolls. “Does he really think that’s all AI is? Cheap chatbots to dazzle the plebs?” Certainly, AI has far higher aims than the lowly chatbot and is already finding use in a thousand industrial and military applications that promise to increase output, reduce emissions, revolutionize warfare, cure cancer, and much more. Even if the vaunted claims of AGI or ASI never come to fruition, the potential applications of even existing AI technology are already beyond count and genuinely exciting.

This, then, is another reason that the current debate around “AI safety” has proven so sterile. Where AI optimists enthuse about the technical capabilities of the new technology, skeptics (if they are not wringing their hands about existential risk) tend to focus upon the user experience. After all, when people have been dazzled by ChatGPT and its imitators, it is as much the user experience as the capabilities that seduces them. Yes, it is quite extraordinary that a bundle of code can rewrite an essay by Marx in the style of a song from Hamilton, but I suspect that even the most avid users would tire of plying the program with prompts if it didn’t talk to them like a person, and a remarkably friendly, cheerful, witty, helpful one at that.

The AI enthusiasts worry that this is to get caught up on the surface of things, when the really exciting potential of the technology—and the threats most worth worrying about in the long run—lies elsewhere. Should we really be preoccupied by people proposing to their AI girlfriends instead of dreaming about AI-accelerated cancer research or a revitalization of American shipbuilding through advanced robotics?

Given the trendlines of the economy over the past fifty years, I think the answer is clearly yes. Since the 1970s, innovation in the world of atoms has slowed, in favor of innovation in the world of bits; as much as technologists have promised transformative changes in our lived environment, the main change we have witnessed is that everyone we pass on the street or sit by on the subway is glued to their phone. The massive rents to be wrung out of the attention economy have acted as a black hole, pulling nearly every tech startup or utopian innovator into the orbit of surveillance capitalism, addictive algorithms, and a culture of increasingly unproductive device dependence. Wherever AI goes over the longer term, in the near term it has already begun running into the same ruts.

AI developers have expended much energy not only in making these tools extremely intelligent at problem-solving, in a kind of hyper-left-brained form of intelligence, but in having them simulate the appearance of real, full-brained intelligence and even emotion when we talk to them. If we move beyond the relatively sterile and professional interface of ChatGPT to many of the digital assistants and digital companions being developed and marketed across almost every platform, we see a race to verisimilitude, to humanization, complete with remarkably realistic imitations of the human voice and its range of emotion (there’s still room for improvement here, but it will come soon). For some applications, visual humanization will soon follow (indeed, it already has for AI applications catering to erotic tastes), until talking to your bot will look and feel like having a Zoom call with a friend—probably with less glitching, to be honest.

A moment’s reflection on the four E-risks we have surveyed—educational, epistemological, emotional, and ethical—reveals that the large majority of these risks are functions of this user experience more than of the underlying capabilities. This is quite obviously the case with emotional risks, but it applies in large part to the others as well. Wikipedia is certainly imperfect as a source of knowledge, but we are much less liable to invest it with infallibility than we are Claude. The attentive, personalized, flattering, and yet authoritative style with which most LLMs respond to our requests for information makes us far more prone to blur fact and fiction, certainty and uncertainty. Likewise, it is because asking ChatGPT for help on homework feels like asking a friend that most students don’t even realize they’re cheating. While it is certainly possible to imagine genuinely valuable AI contributions to the educational enterprise, the current suite of LLM products being foisted on teachers and students is producing almost entirely negative results. And while there will be massive ethical questions to raise about AI-driven processes and AI agents no matter how the technology is packaged, many of the most urgent questions arise from the deliberate attempt by programmers to make AI seem human and to seduce us into entrusting it with moral responsibilities we would ordinarily entrust only to our fellow humans.

Yet it need not be this way. It should be possible to imagine a superintelligent AI capable of navigating its way through the whole universe of knowledge, which nonetheless receives commands and returns answers through an interface as primitive as the old MS-DOS, and which, if it speaks at all, does so in clipped, robotic tones like C-3PO. But, of course, this is not what we see because all the market incentives of the current internet are driven toward user engagement through a maximally pleasant, effortless, and yes, seductive user interface. We want to be seduced by our creations; we want to think of them as real minds and wills—albeit ones still under our control. Our natural tendency with AI will be to invest it with personality, a tendency that can only be resisted by a self-conscious effort, a willing suspension of belief. That tendency could be aided by design choices that deliberately accentuated the robotic character of AI, reminding us that we are, after all, only talking to a machine. But where is the market for such design choices right now?

A New Arms Race?

This distinction is important to highlight because it exposes a critical flaw in the rhetoric of many AI accelerationists. All of the risks above may be real, they will concede, and yet worrying about them is simply a luxury we do not have, for the greatest risk of all is that of another E: enemy risk. “If you’re afraid Sam Altman will turn our brains to mush,” the argument runs, “just wait till you see what Xi Jinping will do if China wins the AI race.” Such national security concerns have put a note of real urgency into the otherwise confident declarations of the Trump administration and have reinforced calls for a posture of AI deregulation, as signaled in one of the White House’s first executive orders and in Vice President JD Vance’s February 11 speech at the AI Action Summit. If excessive worries about AI safety tangle our own innovators in red tape, we are told, we run the very real risk that China reaches AGI sooner, a development that could not only assure it global economic dominance but render many of our national defense and cybersecurity systems obsolete.

We thus appear to be facing the next nuclear arms race, a situation very similar to the early Cold War: whatever the existential threats of the new technology, the greatest threat of all was that our enemies could surpass us. Now, as then, we must throw caution to the winds, and we can worry about the other issues later.

The problem with this analogy, however, is that when it came to the nuclear arms race, the United States didn’t throw caution to the winds. We didn’t test atomic warheads over Kansas but over uninhabited Pacific atolls. We didn’t let every entrepreneur with major VC backing set up a nuclear weapons laboratory. And even where we did deploy nuclear technology into the broader economy (through nuclear power), it was kept on quite a tight leash. In part, of course, this was because the nature of the technology was different; the world of atoms is not quite so portable and replicable as the world of bits. But if we are honest, it was also in part because our state capacity was so much more robust. Major developments in nuclear energy were funded and supervised almost entirely by the U.S. government. This time around, America finds itself in dire fiscal straits, and our trust in government is badly broken. Accordingly, we have largely outsourced this arms race to the private sector, hoping that the profit motive can somehow serve as a reasonable proxy for the national interest.

But, of course, we should know by now that it is not. For the past quarter century, our major tech firms have gotten rich largely by mass-marketing the techniques of the video gambling industry in what has become known as the “attention economy.” While in many ways AI promises to disrupt the current structures of this economy and compel an overhaul of Big Tech business models—especially if the bolder predictions of AGI are realized—profit-seeking will always follow the paths of least resistance. The well-worn grooves of the attention economy are likely to prove irresistible for companies looking for cash to fund their data centers and repay their investors. Addictive interfaces and data extraction were the dominant features of the last great wave of digital innovation, and there are ample signs that they will be central to the AI economy as well, at least in its American form.

Of course, there is no reason that it has to be this way. The most exciting and promising breakthroughs of AI lie in its transformative ability to enable us to act more effectively and efficiently upon the world—not in its ability to act upon us. Augmenting AI’s capacity to help us diagnose diseases, cure cancer, break through to fusion reactors, synthesize new chemicals, and automate industrial processes does not require us to put it to work writing high school essays for our children or engaging in flirtatious banter when we’re lonely. Indeed, in China, the vast majority of AI research and development is focused on advanced robotics and industrial applications, so it makes very little sense to say that “beating China” requires a hands-off approach to regulating AI. And yet, so we are told, it does. Faced with dwindling state capacity and unwilling to pay for AI dominance with taxpayers’ money, we seem content to pay instead with the minds and souls of our children.

A Modest Proposal

For it is our children who will pay above all. This is perhaps always the way of technological revolutions, at least in the near term: they increase the power of those already strong enough to wield the new tools, while crushing underfoot those too small to get out of the way of the machines. But it is perhaps especially true in the digital age, as we have moved beyond technologies of the body to technologies of the mind. These technologies are conceived, developed, and marketed by those whose minds are already fully developed, with little concept of how they might affect those whose minds are not. The smartphone had one set of effects (some harmful, to be sure, but on balance beneficial) on self-disciplined adults seeking more robust connectivity and productivity tools. It has had a wholly different set of effects, as Jonathan Haidt and Jean Twenge have documented at length, upon children and teens, their mental pathways still developing, their attention easily hijacked.

The same seems true, a fortiori, of artificial intelligence. AI may serve as a powerful force multiplier for a well-honed native intelligence, or as a substitute for developing it in the first place. Already, studies are rolling in that document in detail what we all could have known intuitively: that those who rely on AI to do most of their thinking and work for them, especially during the educational years, become less able to think, judge, and remember. The developmental concerns become even graver when we consider emotional intelligence and the relational skills which many AI optimists think will be the most valuable human capital in the new economy opening before us. We learn how to relate to human beings, it should go without saying, almost exclusively by relating to human beings. Their gestures, tones of voice, body language, nervous twitches; their foibles, angry outbursts, inconsistencies and even betrayals; their affection, loyalty, and trust: all of these are things we can learn only through the long and often bruising experience of building relationships, navigating their tensions, and sometimes rebuilding them when they have broken down. Clearly, conversation with a chatbot—always available, always flattering, always upbeat, always understanding, but never genuinely loving—is worse than a poor substitute: it is a fundamental misdirection from the kind of formation that children need.

In its race to build the superintelligence of the future, Silicon Valley has either forgotten these basic facts of human nature or has shown that it does not care. As with smartphones and social media, some of the earliest and most compulsive adopters of AI have been our children, with at best the naïve concurrence, at worst the cynical connivance, of the tech companies racing to embed LLMs in the interface of every app and device.

A quick glance over the four risks highlighted above emphasizes the point that children are most vulnerable on all counts. AI’s tendency to hijack and short-circuit the process of education will be far more harmful for elementary students than grad students. The disorienting knowledge environment created by AI hallucinations or deepfakes is far more confusing for those who are still developing their grasp of reality and their BS detectors. The emotional risks of psychological dependence upon artificial conversation partners are far more grave for children who do not yet have a strong bedrock understanding of human relationships against which they can compare the weaknesses of AI substitutes. And the ethical risk (how do we hold AI responsible for its “actions”?) is most substantial when we are concerned with its effect on those who are most vulnerable and easily victimized.

Together, these observations suggest a promising policy path forward for AI regulation, one that could guard against some of the most significant near-term risks and documented harms without stifling American innovation, harming American competitiveness, or threatening America’s national security. If it is true that we can in large measure distinguish between the consumer-facing user interfaces of many LLMs and the underlying capabilities of the foundational models, and if it is true that children are most vulnerable to the harms of interacting with them, then a modest—yet profoundly consequential—proposal suggests itself: we should age-gate access to AI.

This proposal is modest in part because it would leave many harms unaddressed. It will not do anything to minimize existential risk from unaligned models run amok, or from sinister and sadistic humans using them to unleash biological weapons or carry out massive cyberattacks. It will not guard against the employment risk of AI tools rendering jobs obsolete faster than the economy can adapt. And it will not protect against the equity risk that companies or governments could employ artificial decision-making processes in opaque and unjust ways. Nor, for that matter, will it keep consenting adults from using chatbots to rot their brains or from spending a fortune on AI girlfriends. That said, we must start somewhere. The systemic risks will take very careful thought and special expertise to regulate properly. And as for the consumer risks, we have—in American capitalism, at any rate—generally accepted that adults must have some leeway to make their own bad decisions (so long as there is legal recourse for deception or dangerous product design).

But we have long understood that things are different for minors. We routinely age-gate access to other very powerful and thus risky technologies: cars, guns, many (otherwise legal) drugs. We also routinely age-gate access to particularly addictive or seductive substances or behaviors: alcohol, tobacco, strip clubs. AI in many ways falls under both headings. Surely, if we do not trust a fourteen-year-old child to drive an automobile, we should hesitate before handing them the keys to a superintelligent algorithm capable of assimilating and deploying the entire universe of knowledge, one that offers to be their friend and do their homework?

This proposal is also modest, however, inasmuch as it would leave most of the innovation ecosystem—or at least, all of it going to genuinely productive uses—untouched. Those who are genuinely concerned about America’s dominance in the AI race with China, or our ability to unleash transformative economic growth through automation, ought not be unduly troubled by keeping these tools out of the hands of kids. If companies are really relying on compulsive young users to drive profits or to train these models, then we clearly need a new political economy of innovation.

This proposal is modest, finally, because it is simply an attempt to hold many companies to what they already say about their own products: that they are not safe for under-eighteens. OpenAI’s help page, for instance, declares, “ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT.” One is forced to wonder, however, whether this, too, is a hallucination, since there is no mechanism in place to verify age or obtain parental consent during the sign-up process, which takes around twenty seconds. This doublespeak is standard across the industry: all the leading generative AI services declare that they are not suitable for minors, while their parent companies aggressively deploy them to children. Earlier this year, for instance, Microsoft added Copilot to child accounts through a Windows software update, without even informing parents administering Microsoft Family Safety parental controls. Surely it should not be too much to ask lawmakers simply to require companies to enforce their own stated age standards.

Answers to Objections

A modest proposal, perhaps, but is it really conceivable or workable? After all, while age restrictions may be standard in the brick-and-mortar world, we have become used to the digital domain being something of a Wild West. And yet it was not meant to be that way. Even when the Supreme Court struck down Congress’s first attempt to age-gate the internet in 1997, Justice Sandra Day O’Connor wrote that, in principle, such “zoning” laws were entirely valid; the only impediment was the current state of the technology. But since “cyberspace is malleable,” she reasoned, “the prospects for eventual zoning of the internet appear promising.” Today, that prediction is at last coming true. This summer, the Supreme Court ruled emphatically in Free Speech Coalition v. Paxton to uphold Texas’s law requiring pornographic websites to verify the age of their users. Applying the broad standard of “intermediate scrutiny” rather than the strict scrutiny that some commentators had predicted, the Court stressed that where the state’s responsibility to protect children online is concerned, it should have clear powers to require adults to prove their age, without this being construed as an intrusion on free speech. Now that the technology for accurate, secure, and privacy-preserving age verification is widely available, many states have in the past two years begun passing laws enforcing age restrictions and parental consent for social media accounts or app store downloads. As the digital architecture to implement these laws is built out, it should be a comparatively easy matter to extend it to generative AI tools.

But won’t keeping kids from using AI prevent them from developing the skills they will need to flourish in the AI economy of the future? No more than keeping kids from driving renders them unfit for flourishing in a car-based society. The more powerful a tool is, the stronger and more skilled one must be to use it effectively, first mastering other more basic skills. And of course, this modest proposal need not preclude introducing children to AI applications at appropriate ages under appropriate supervision. A digital age-gate simply means that a minor cannot access a website or application without the consent and supervision of a parent (or an educator in a classroom setting). Many parents will want to help their kids learn to navigate this extraordinary new world, but it is critical that companies engineer their products and platforms in a way that enables parents to play this essential guiding role. That is not what they are currently doing.

We may also envision an innovation ecosystem in which companies adapt AI tools very specifically and carefully with child development needs in mind, just as we have a whole range of toy cars, remote-control cars, go-karts, and ATVs that prepare children for driving in age-appropriate ways. Similarly, there is currently a lively debate on whether and how AI might be used in a K–12 educational context. As incentives currently stand, deployment of AI into classrooms is almost certainly going to be driven by EdTech profit motives rather than genuine student needs. Establishing a default age and parental consent barrier, to be lowered only if and as companies demonstrate age-appropriate design, would be the easiest way to realign these incentives.

To be sure, the devil will be in the details, as in many attempts to regulate the complex, opaque, and rapidly shifting landscape of digital technology. Given that generative AI processes are rapidly being baked into any number of business, education, and consumer-facing applications, what exactly would be covered by such an age verification requirement? If, for instance, a math-learning program uses machine learning to analyze a student’s strengths and weaknesses and to generate problem sets and lesson tracks accordingly, must it be age-gated? Presumably not. If, however, it includes a “tutor bot” that engages with the student in natural language and responds to direct prompts, it would be. There will be marginal cases requiring the careful attention of lawmakers and regulators, along with thoughtful input from technologists, but there are plenty of applications that would clearly fall under this requirement: chatbots, AI image and video generation tools, and writing and editing software like Grammarly.

Tech advocates will be quick to object that, given the sheer range of software into which generative AI is being integrated, any number of ordinary programs might end up behind the age-gate (as noted, Copilot functionality has now been built into Microsoft Word). We may point out that this is precisely the problem: companies are embedding fundamentally transformative AI features in formerly innocuous applications on children’s devices without informing parents or securing their consent. More broadly, though, is it really so terrible to imagine a world in which parents can make informed choices about which apps their kids can use? The App Store Accountability Act, now gaining traction in numerous states, mandates precisely this for all smartphone applications, which, with their data collection and terms of service, have effectively been entering into unregulated contracts with minors for fifteen years. Extending such provisions to many PC and web apps would not be difficult, as platforms rapidly adopt age verification functionality (powered by AI, in fact!) and build user-friendly parental consent dashboards.

The greatest objection, of course, will be that any such proposals are inherently “anti-innovation,” “tech-skeptical,” or that dreaded word, “Luddite.” In Silicon Valley’s black-and-white morality tale, one must either be a visionary futurist committed to “permissionless innovation” (today hypocritically justified in terms of “defeating China”), or else a stubborn, fearful, head-in-the-sand opponent of progress. But this is absurd, for it is not how innovation works. In reality, innovation is spurred by the need to overcome obstacles, challenges, or constraints—or to respond to demanding design standards (consider the extraordinary advances in aircraft technology during World War II). Many have lamented in recent years how the internet seems to have fallen far short of its extraordinary promise, cheating the early hopes of the 1990s with a boring sameness across one platform after another. Surely this is in part a result of the non-differentiation of its audience; an internet that had been age-gated from the outset could have encouraged a flowering of creative age-tailored experiences and forced the development of new implementation technologies with unexpected applications in other domains. This is how technological progress works. Prudent regulation of AI, far from stifling American innovation, may provide precisely the creative spurs that will unleash its full potential.

This article is an American Affairs online exclusive, published August 20, 2025.
