The Myth of Internet Exceptionalism: Bringing Section 230 into the Real World

As it turns out, the internet is not that exceptional after all. It may be one of the greatest inventions since the printing press, as the cliché goes, and it has unquestionably revolutionized communication and commerce. But as David Pierce observed in Protocol shortly after the January 6 Capitol riot, “Everything is IRL” now. “[T]he barriers between online and offline life have disappeared completely.”1

The idea of the internet as just another tool “in real life” has been gaining strength for some time. The Associated Press, for example, took the internet down a peg five years ago by lowercasing it in the AP Stylebook.2 Policymakers and the public are increasingly recognizing that the internet can lead to bad outcomes as easily as good ones. Hence the aphorism that, on social media, misinformation spreads faster than truth.3 Some even argue that the internet is causing more harm than benefit, a view shared by a surprising number of technology company employees.4

This growing realization is calling into question internet exceptionalism—the notion that the internet is so unique, yet so delicate, that we must shield it from the ordinary legal mechanisms that promote accountability.5 Internet exceptionalism is prominently embodied today in Section 230 of the Communications Act,6 which courts have applied in a way that largely immunizes online platforms from the obligation that most businesses have to take reasonable steps to prevent the use of their services from causing harm.

Concerns over unlawful conduct on user-generated content platforms, however, have come to a head. It is time to restore for such platforms the common law duty of reasonable care7 that—but for the current application of Section 230—would hold them accountable when they negligently, recklessly, or knowingly facilitate harm. To accomplish that, Congress should amend Section 230 to require that user-generated content platforms take reasonable steps to curb unlawful conduct as a condition of receiving the section’s liability protections.

Misinformation, bias, and hate speech are more challenging to address because they often involve constitutionally protected expression and editorial discretion. One way forward on these issues, however, would be for Congress to adopt transparency provisions requiring that platforms (1) inform users of the platforms’ content moderation policies; and (2) provide a process for users to challenge the platforms’ decisions to take down or leave up particular content under those policies.

The Era of Internet Exceptionalism:
Visions of an Online Utopia

One of the most vocal champions of internet exceptionalism was Electronic Frontier Foundation cofounder and Grateful Dead lyricist John Perry Barlow, who passed away in 2018. In A Declaration of the Independence of Cyberspace, penned from Davos twenty-five years ago, Barlow threw down the gauntlet:

Governments of the Industrial World, . . . I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. . . . You have no sovereignty where we gather. . . . You claim there are problems among us that you need to solve. . . . Many of these problems don’t exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. . . . Our world is different. . . . Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here. . . . The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. . . . We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.8

Barlow wrote his manifesto to protest the Communications Decency Act.9 Adopted by Congress as Title V of the 1996 Telecommunications Act, the CDA sought to address indecent, obscene, and violent transmissions by telephone, on television, and over the internet.10 Section 509 of the CDA added Section 230 to the Communications Act.11

That Barlow inked the Declaration the same day President Clinton signed the Telecom Act was no coincidence. Attempts to legislate acceptable conduct online offended Barlow. In his view, the internet was separate from the physical world and without geographic borders, placing it beyond the reach of governments.12 As the Declaration makes clear, he believed the internet could be exclusively self-governed through voluntary codes of conduct.

Presumably much to Barlow’s relief, CDA provisions criminalizing certain indecent communications online were struck down on First Amendment grounds following lawsuits supported by the EFF and others.13 Provisions requiring scrambling of adult cable channels were also struck down on First Amendment grounds.14 Provisions criminalizing obscene communications online were left intact, however.15 Provisions regarding the creation of the television rating code and requiring certain televisions to include “V-chip” technology that allows viewers to block programming based on the ratings also remain.16 Section 230 of course remains as well.

Section 230 and Stratton Oakmont v. Prodigy

In creating Section 230, Congress had two goals: to promote the availability of internet platforms for communication and commerce with minimal regulation;17 and to counter harmful, unlawful, and unwelcome conduct online.18 Congress was particularly motivated by the 1995 New York trial court decision in Stratton Oakmont v. Prodigy.19 Someone on Prodigy’s “Money Talk” bulletin board had anonymously leveled claims of criminal securities fraud against Stratton Oakmont—the investment company that would eventually inspire the movie The Wolf of Wall Street. After Stratton Oakmont sued Prodigy for libel, the court ruled that Prodigy’s use of human and automated content moderation rendered it a “publisher.”

Under libel law, publishers—those with editorial discretion over another’s content, such as producers of books and periodicals—can be found culpable even when they have unknowingly published something false and defamatory.20 Because Prodigy moderated at least some posts, the court concluded that Prodigy could be held liable as a publisher of the anonymous post about Stratton Oakmont if the post was found unlawfully defamatory, even though there was no evidence that Prodigy had been aware of the post.

Congress, concerned that platforms would become reluctant to moderate content for fear of incurring liability under Prodigy,21 created Section 230 to prevent content moderation efforts from serving as the basis for federal or state civil liability or state criminal liability.22 In particular, Section 230(c)(2) states that:

[n]o provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.23

The theory was that exempting platforms from liability when they seek to prevent harmful behavior would encourage them to do so. Unfortunately, courts have applied other language in Section 230 to shield platforms not only when they do moderate, but even when they do not.

Section 230’s Misincentive:
Removal of the Duty of Reasonable Care

Ordinarily, businesses have a common law duty to take reasonable steps not to cause harm to their customers, as well as to take reasonable steps to prevent harm to them.24 That duty also creates an affirmative obligation in certain circumstances for a business to prevent one party using the business’s services from harming another party.25 Thus, platforms could potentially be held culpable under common law if they unreasonably created an unsafe environment, as well as if they unreasonably failed to prevent one user from harming another user or the public.

Section 230(c)(1), however, states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”26 Courts have concluded that this provision “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”27

Based on that rationale, judges have ruled that they cannot even consider claims that platforms have negligently, recklessly, or knowingly facilitated terrorism,28 harassment,29 sexual disparagement,30 nonconsensual dissemination of intimate photos,31 housing discrimination,32 distribution of child sexual abuse materials,33 and other unlawful conduct by platform users. Offering perhaps the most concrete legal embodiment of Barlow’s internet exceptionalism, Section 230(c)(1) as applied by the courts has exempted platforms from the common law duty of taking reasonable steps to prevent users of their services from causing harm.

Ironically—and in many cases tragically—this reduces the likelihood platforms will moderate content, the opposite of what Congress intended.34 Free from potential liability, platforms have a financial interest in minimizing spending on proactive measures to prevent unlawful activity, and even on reactive measures to mitigate further harm when unlawful activity has already occurred. As a result, instead of creating an incentive to moderate content, Section 230 creates a misincentive. Platforms can conserve resources and invest more recklessly in growth, giving them a competitive edge over their brick-and-mortar rivals and allowing them to shift onto society the costs of combating (or failing to combat) harm.

Barlow’s Miscalculation

In the brick-and-mortar world, legal accountability helps check human failings. Section 230 as applied by the courts has removed much of that legal accountability from the online world.35 It should come as no surprise, then, that we are witnessing an explosion of illicit activity online,36 such as fraud,37 wildlife trafficking,38 looting of antiquities,39 sale of unsafe products,40 identity theft and theft of personal information,41 spread of malware,42 housing discrimination,43 harassment,44 unlawful drug sales,45 cyberattacks,46 terrorist activity,47 espionage,48 nonconsensual dissemination of intimate images,49 and the proliferation of child sexual abuse materials.50

Barlow’s aspirations of internet exceptionalism may have been laudable, but his assumptions were flawed. Problems do exist online, and platforms are failing to adequately address them by their own means. The social contract has been breached. And as it turns out, the cyber world is not different. We are seeing all the ills of the brick-and-mortar world replicated online. The Golden Rule is failing, creating a “virtual world” that is no more humane or fair than the “real world.” In many ways, human failings are not minimized by the internet’s virtual nature but magnified by it.

Against this backdrop, even technology companies’ own employees are now calling for Section 230 reform. Indeed, among tech employees who know what Section 230 is, close to three-quarters say Congress should change it, according to a recent survey.51

Restoring the Duty of Reasonable Care

The solution is for Congress to amend Section 230 to require that platforms take reasonable steps to curb unlawful conduct as a condition of receiving the section’s liability protections.52 That would remove the current misincentive regarding content moderation and better accomplish Congress’s original objective of increasing the likelihood that platforms will curb harm online.

Barlow was not wrong to be concerned that regulating the internet can harm innovation and expression. But applying the common law duty of reasonable care is not regulating. Regulation involves advance restrictions on the permissible business models of multiple, similarly situated entities.53 Applying the duty of reasonable care would not limit platforms’ discretion over their business models on the front end. They would be free to experiment. If, however, an individual platform uses its discretion unreasonably and causes harm in a particular circumstance, it could be held accountable on the back end, just like most other unregulated businesses. This back-end accountability would also prompt more responsibility from the start,54 encouraging “responsibility by design.”

The platforms want the public and policymakers to believe that any reform of Section 230 would be tantamount to eliminating the provision and, with it, free speech on the internet. Indeed, they not only make these claims themselves, but are also spending millions of dollars supporting associations and front groups to make these claims on their behalf.55

I am not urging Congress to repeal the provision, however, just to amend it. And, as always, the First Amendment governs free expression in the United States—including on the internet.56 Courts would be obligated to refrain from imposing liability on platforms where doing so would unconstitutionally chill lawful content.

In reality, platforms oppose Section 230 reform because they want to continue avoiding liability (and to continue making money) even when they negligently, recklessly, or knowingly disregard unlawful activity. But that’s not a winning argument.

Critics of Section 230 reform thus continue to argue that subjecting platforms to potential liability for user behavior would recreate the problems of the Prodigy decision, prompting platforms to either shun user-generated content or refrain from moderating entirely.57 But if the Section 230(c)(2) safe harbor for content moderation remains, the original Prodigy impediment to moderation would still be removed. Platforms could continue to carry user-generated content without fearing that the mere act of moderating would create liability, which means they could continue to serve as avenues for free expression. And conditioning liability limitations on taking reasonable steps to curb unlawful behavior would help ensure that platforms actually moderate rather than just abdicate responsibility.

Alternatively, critics argue that the threat of liability will prompt platforms to over-moderate, reflexively taking down content without doing any due diligence, including in response to spurious claims by people who dislike particular content that is nonetheless lawful.58 If platforms do so, however, they risk losing their shield because of the existing Section 230(c)(2) requirement that they moderate in “good faith” to receive the liability limitation.

If competition is as robust and barriers to online entry as low as the platforms say, a multiplicity of online outlets (in addition to the more traditional ones) will remain available for differing viewpoints. Platform apologist NetChoice, for example, argues that “while the major tech businesses are unquestionably popular among American consumers and have grown substantially over the past two decades, they are far from monopolies.”59 NetChoice says that Google, Facebook, and Twitter compete with each other, LinkedIn, Amazon, “and a wide variety of other online advertising platforms,” as well as with television, billboards, radio, and newspapers.60 And it argues that Google and Facebook control less than 32 and 24 percent of the digital advertising market, respectively.61 But even if NetChoice is correct that platform competition abounds, the question remains whether users will have responsible, healthy internet platforms from which to choose, especially when those platforms are legally exempt from the ordinary duty of reasonable care.

Next, critics argue that the volume of content the platforms carry makes curbing all unlawful activity impossible, especially for smaller providers.62 But the duty of care does not require perfect moderation, just reasonable effort. The reasonableness standard is also flexible and has been developed over more than a hundred years of precedent.63 Courts will take into account the severity of a harm, its foreseeability, the cost to combat it, and the resources available to the platform. In addition, smaller platforms will often have fewer users and uses to moderate.

Critics also argue that the cost of litigation will chill innovation, even when the litigation is without merit.64 But that is an argument for general tort reform, not continuation of a perverse immunity reserved for platforms.

In any event, a recent Internet Association study indicates that the majority of “Section 230 cases” are resolved on grounds other than Section 230.65 Restoring the duty of reasonable care won’t change the outcome in those cases. People will still bring flawed lawsuits and platforms will still succeed in having them dismissed early for failure to state a cognizable legal claim or other defects.66 But where plaintiffs have colorable claims that the platforms acted negligently, or worse, justice warrants the cases at least be heard. And where the platforms acted reasonably, they will still prevail.

Increasing Transparency

Misinformation, bias, and hate speech are more challenging to address than clearly unlawful conduct because those issues often involve lawful expression and the constitutionally protected editorial discretion of platforms.67 One approach for those issues, however, would be to hold platforms accountable under their terms of service, as well as to apply Section 230’s “good faith” language.

For example, if a platform failed to take down misinformation or hate speech despite applicable language in its terms of service, a user could bring a breach of contract or unfair and deceptive acts or practices claim.68 A user concerned that a platform did take down content, but in a way that did not comply with the policies and procedures in the platform’s terms of service, could argue that the platform lost its Section 230(c)(2) shield by failing to act in good faith. In either case, the platform would not be automatically liable. The party bringing the claim would still need to prove some cause of action and that the platform had caused cognizable harm. And courts would still be obligated to apply a First Amendment analysis.

Platforms’ terms of service, of course, might be silent or vague on particular issues, or say that the platforms reserve the right to take unilateral action or no action at all. To address this, Congress could create transparency provisions requiring platforms to adopt and disclose content moderation policies addressing (1) what content the platforms will take down and leave up; (2) how people can file complaints about deviations from those policies; and (3) how people can appeal the platforms’ decisions under those policies; and to (4) disclose aggregated data regarding complaints, takedowns, denials of takedown requests, and appeals.69

Requiring platforms to adopt such moderation policies would not run afoul of the First Amendment because Congress would not be dictating what the platforms must, or may not, take down or leave up.70 So long as a platform has a policy and informs its users what the policy is, the policy could even be that the platform will not take down any lawful speech.

Requiring platforms to publicly disclose their content moderation policies, along with aggregated complaint data, also would not violate the First Amendment. The government may compel a commercial enterprise to provide “purely factual and uncontroversial information about the terms under which [its] services will be available” where the “disclosure requirements are reasonably related to the State’s interest in preventing deception of consumers.”71

Although Congress could not constitutionally require platforms to allow or prohibit posting of certain lawful speech, these transparency requirements would enable consumers to better evaluate platforms’ practices in deciding what platforms to use. New entrants and existing providers might also use such information to compete based on content moderation practices.

Reasonable Steps

Barlow saw the internet as an ethereal realm occupied only by better angels. But his views failed to account for human nature or the fact that, without legal accountability, even virtual communities break down. And although Congress passed Section 230 with the best of intentions, as applied by the courts it makes content moderation less likely, not more, and arguably less transparent and more arbitrary.

Requiring platforms to take reasonable steps to curb unlawful activity as a condition of receiving Section 230’s protections would restore the common law duty of care. It would provide platforms a safe harbor when they act responsibly, enabling them to continue innovating and serving as healthy venues for commerce and expression, without giving them a pass when they act negligently or worse. Most importantly, it would bring Section 230—as well as the internet—into the real world, creating a less toxic online experience for everyone in the process.

This article originally appeared in American Affairs Volume V, Number 2 (Summer 2021): 179–91.

Notes
1 David Pierce, “The End of the Online vs. Offline Debate,” Protocol, January 10, 2021.

2 Lauren Easton, “Ready to Lowercase ‘Internet’ and ‘Web,’” Associated Press, April 2, 2016.

3 Peter Dizikes, “Study: On Twitter, False News Travels Faster Than True Stories,” MIT News, March 8, 2018.

4 Approximately 40 percent of 1,505 technology employees surveyed said that the technology industry causes more harm than good. Emily Birnbaum and Issie Lapowsky, “How Tech Workers Feel about China, AI and Big Tech’s Tremendous Power,” Protocol, March 15, 2021.

5 See Neil Turkewitz, “The Song Remains the Same: Exceptionalists against the Application of the Law,” Los Angeles & San Francisco Daily Journal, January 12, 2017.

6 47 U.S.C. § 230.

7 See Dan B. Dobbs et al., Hornbook on Torts, 2nd ed. (St. Paul, Minn.: West Academic Publishing, 2015), 203–5, 465–66.

8 John Perry Barlow, “A Declaration of the Independence of Cyberspace,” Electronic Frontier Foundation, February 8, 1996.

9 See Jack Goldsmith and Tim Wu, Who Controls the Internet?: Illusions of a Borderless World (Oxford: Oxford University Press, 2008), 19–20.

10 Telecommunications Act of 1996, Pub. L. No. 104–104, §§ 501–61, 110 Stat. 56, 133–43.

11 Telecommunications Act of 1996, § 509, 110 Stat. 137–39.

12 Goldsmith and Wu, Who Controls the Internet?, 17–19.

13 Reno v. ACLU, 521 U.S. 844 (1997); Shea v. Reno, 930 F. Supp. 916 (S.D.N.Y. 1996), aff’d, 521 U.S. 1113 (1997).

14 U.S. v. Playboy Entertainment Group, 529 U.S. 803 (2000).

15 ACLU, 521 U.S. 844.

16 110 Stat. 140–42, § 551 (creating 47 U.S.C. §§ 303(w)–(x), 330(c)).

17 47 U.S.C. § 230(a)(1), (a)(3)–(5), (b)(1)–(2).

18 47 U.S.C. § 230(a)(2) (finding that internet platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops”); 47 U.S.C. § 230(b)(3)–(5) (making it U.S. policy to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material,” and to “ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer”).

19 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).

20 W. Page Keeton et al., Prosser and Keeton on the Law of Torts, 5th ed. (St. Paul, Minn.: West Academic Publishers, 1984), 803, 810. For a detailed analysis of the elements of libel and some of the seminal online libel cases pre- and post-Section 230, see Neil Fried, “Dodging the Communications Decency Act When Analyzing Libel Liability of On-Line Services: Lunney v. Prodigy Treats Service Provider like Common Carrier Rather Than Address Retroactivity Issue,” Columbia Science and Technology Law Review 1 (November 1999).

21 See Neil Fried, “Revisiting Prodigy: A Section 230 Thought Experiment,” DigitalFrontiers Advocacy, July 7, 2020.

22 47 U.S.C. § 230(c), (e). See also Telecommunications Act of 1996, S. Rep. 104–230, at 194 (1996) (Conf. Rep.).

23 47 U.S.C. § 230(c)(2), (c)(2)(A).

24 Fowler V. Harper and Posey M. Kime, “The Duty to Control the Conduct of Another,” Yale Law Journal 43 (1934): 886–87.

25 Harper and Kime, Yale Law Journal 43, 887–88; Dobbs et al., Hornbook on Torts, 459–60, 465–66, 615–16, 620–21, 633–44, 651–55.

26 47 U.S.C. § 230(c)(1).

27 Zeran v. AOL, 129 F.3d 327 (4th Cir. 1997) (emphasis added).

28 Force v. Facebook, 934 F.3d 53 (2d Cir. 2019).

29 Herrick v. Grindr, No. 18-396 (2d Cir. Mar. 27, 2019).

30 Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398 (6th Cir. 2014).

31 Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009).

32 Chicago Lawyers’ Committee for Civil Rights v. Craigslist, 519 F.3d 666 (7th Cir. 2008).

33 Doe v. AOL, Inc., 783 So. 2d 1010 (Fla. 2001).

34 See Neil Fried, “Why Section 230 Isn’t Really a Good Samaritan Provision,” DigitalFrontiers Advocacy, March 24, 2021.

35 See Neil Fried, “Time for the Section 230 Pendulum to Swing,” DigitalFrontiers Advocacy, September 18, 2020.

36 See Neil Fried, “When It Comes to Section 230, Something Must Change,” DigitalFrontiers Advocacy, September 3, 2020.

37 Sabri Ben-Achour, “The Most Common Scams in the U.S. Involve Online Purchases,” Marketplace, October 28, 2019.

38 Kurt Wagner, “A Black Market in Wildlife Trafficking Thrives on Facebook and Instagram,” Los Angeles Times, July 12, 2019.

39 Karen Zraick, “Now for Sale on Facebook: Looted Middle Eastern Antiquities,” New York Times, May 9, 2019.

40 Andrew Martins, “Online Searches Often Lead Customers to Counterfeit Goods,” Business News Daily, October 21, 2019.

41 Zoya Gervis, “More Than 60% of Americans Say They’ve Been a Victim of an Online Scam,” New York Post, December 6, 2019.

42 Charlie Osborne, “The Hacker’s Paradise: Social Networks Net Criminals $3bn a Year in Illicit Profits,” ZDNet, February 26, 2019.

43 Aaron Rieke and Corrine Yu, “Discrimination’s Digital Frontier,” The Atlantic, April 15, 2019.

44 Angela Chen, “The Legal Crusader Fighting Cyberstalkers, Trolls, and Revenge Porn,” MIT Technology Review, August 26, 2019.

45 Nitasha Tiku, “Whistleblowers Say Facebook Has Not Warned Investors about Illegal Activity, in New SEC Complaint,” Washington Post, May 27, 2020.

46 Danny Palmer, “CEOs Are Deleting Their Social Media Accounts to Protect against Hackers,” ZDNet, January 28, 2020.

47 Desmond Butler and Barbara Ortutay, “Facebook Auto-Generates Videos Celebrating Extremist Images,” Associated Press, May 9, 2019.

48 Catalin Cimpanu, “FBI Warning: Foreign Spies Using Social Media to Target Government Contractors,” ZDNet, June 18, 2019.

49 Cara Bayles, “With Online Revenge Porn, the Law Is Still Catching Up,” Law360, March 1, 2020.

50 Michael H. Keller and Gabriel J. X. Dance, “Child Abusers Run Rampant as Tech Companies Look the Other Way,” New York Times, November 9, 2019.

51 Seventy-one percent of technology employees surveyed who knew what Section 230 is supported reform. Birnbaum and Lapowsky, Protocol.

52 See Hearing on “Disinformation Online and a Country in Crisis,” before House Subcommittee on Communications and Technology and House Subcommittee on Consumer Protection and Commerce, House Committee on Energy and Commerce, 116th Cong. (2020) (statement of Neil Fried, Principal, DigitalFrontiers Advocacy); Hearing on “Fostering a Healthier Internet to Protect Consumers,” before House Subcommittee on Communications and Technology and House Subcommittee on Consumer Protection and Commerce, House Committee on Energy and Commerce, 116th Cong. (2019) (statement of Prof. Danielle K. Citron, Boston University School of Law).

53 Dobbs et al., Hornbook on Torts, 10.

54 Dobbs et al., Hornbook on Torts, 6, 10–11.

55 See “Content Moderation: Section 230 of the Communications Decency Act,” Internet Association, accessed April 14, 2021; “Section 230: The ‘Most Important Law in Tech,’” Computer, Communications & Internet Association, accessed April 14, 2021; “incompas to FCC: If You Shut Down Section 230, You Shut Down Competition,” incompas, accessed April 14, 2021; “Section 230,” NetChoice, accessed April 14, 2021; “Section 230,” Public Knowledge, accessed April 14, 2021; “CDA 230,” Electronic Frontier Foundation, accessed April 14, 2021.

56 See Neil Fried, “Instead of Cry Wolf, Platforms Should Focus on Predators Within,” DigitalFrontiers Advocacy, August 13, 2020.

57 See Berin Szoka et al., “Why Section 230 Matters and How Not to Break the Internet,” TechDirt, February 21, 2020.

58 See Eric Goldman, “How Section 230 Enhances the First Amendment,” American Constitution Society, July 2020, 4–5.

59 Hearing on “Reviving Competition,” Part 3, “Strengthening the Laws to Address Monopoly Power,” before the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law, 117th Cong. (2021) (comment of NetChoice, “Promoting Competition and Fostering Opportunity for American Consumers”).

60 Hearing on “Reviving Competition” (comment of NetChoice).

61 Hearing on “Reviving Competition” (comment of NetChoice).

62 See Matt Perault, “Section 230: A Reform Agenda for the Next Administration,” Day One Project, October 26, 2020.

63 See Neil Fried, “Section 230 Doesn’t Need to Be Repealed. We Can Reform It.,” Protocol, November 30, 2020.

64 See Szoka et al., TechDirt.

65 Internet Association, “New Study of 500 Section 230 Decisions Shows the Law’s Broad Benefits” (press release), July 20, 2020.

66 Neil Fried, “IA Study Shows Sec 230 Reform Would Have Impact Only Where Needed,” DigitalFrontiers Advocacy, August 3, 2020.

67 See Prager University v. Google, No. 18-15712, slip op. at 5 (9th Cir. Feb. 26, 2020); Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019).

68 See Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009).

69 See Neil Fried, “Senate Hearings on Section 230 Reform Show Bipartisan Way Forward,” DigitalFrontiers Advocacy, November 23, 2020.

70 Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, 642–43, 659–62 (1994).

71 Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626, 651 (1985).

