“Section 230” is a buzzword of late—heard on the Hill, on Pennsylvania Avenue, and now in the (virtual) boardrooms of companies that let their users freely post anything from stories of their daily lives to product reviews. So what is it? And why should you care? This three-part series breaks down the history, evolution, and proposed future of Section 230; in this post, we explore the basics and history of Section 230.
What was the world like before Section 230?
Take yourself back to a world before the Internet. If a writer wanted to get ideas out to the world, he or she probably couldn’t do it alone. The writer would probably need a publisher to put ideas into print, and the publisher probably would need distributors (e.g., newsstands, bookstores) to get the printed materials out to readers.
Now, let’s say that writer said something that was wrong—maybe it was defamatory, or obscene, or violated copyright, or otherwise broke the law. It’s uncontroversial that the writer might be held accountable. But what about the publisher? What about distributors?
Publishers and distributors could, at least theoretically, be held accountable. Over time, the courts chipped away at that liability by applying the First Amendment. In Smith v. California, 361 U.S. 147 (1959), for example, the Supreme Court held that a bookseller couldn’t be held liable for wrongful content unless it had some knowledge of what the material contained.
And in New York Times Co. v. Sullivan, 376 U.S. 254 (1964), the Supreme Court famously raised the bar even further, holding that a public official can’t recover for defamation without proof of “actual malice.” In that case, the actual-malice rule protected the New York Times from a claim that it defamed a public official simply by printing a paid advertisement written by a third party.
But these opinions were basically about First Amendment standards and the level of scienter needed to hold someone accountable for bad content. Neither decision fundamentally changed the proposition that a distributor (e.g., a bookstore) could be held liable for content it merely sold, or that a publisher (e.g., a newspaper) could be held liable for printing something written by someone else.
The problem was compounded when the Internet came around. In the 1990s, people rightly celebrated the “Information Superhighway” and its ability to facilitate the free flow of ideas. Message boards and other online fora let people post content that would be available across the globe within seconds. But would the people hosting those message boards be liable, just like a publisher or bookstore?
Courts started to address the issue in the early 1990s. In Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991), CompuServe, the owner of an online bulletin board, made no attempt to moderate the content its users posted, including content alleged to be defamatory. The court held that CompuServe was more like a distributor than a publisher and was not liable for the user-created posts. That precedent offered online platforms some protection, but the threat of liability for user content remained.
Further complicating matters, some cases came out the other way. In Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995), a court held that the owner of an online bulletin board was liable as the publisher of user-created content because it exercised some editorial control over the messages posted to its boards. That decision effectively discouraged online platforms from policing their platforms at all: if they did, they might be held liable for material that slipped through the cracks.
So what is Section 230?
In 1996, the Telecommunications Act was signed into law as the first major piece of telecommunications legislation since 1934. It included provisions related to the Internet and other “interactive computer services” (ICS), among them Title V, which became known as the Communications Decency Act (CDA). Section 230 of the Act was designed to promote the continued use and expansion of the Internet while also protecting users, notably children and schools, from “objectionable or inappropriate online material” and behavior such as stalking. But Section 230 is best known for its provision titled “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” which creates a safe harbor for ICS providers and users against actions based on the content of third-party users through two key clauses (paraphrased below, emphasis added):
(c)(1) No ICS provider or user shall be treated as the publisher or speaker of any information provided by another user.
(c)(2) No ICS provider or user shall be held liable for (A) restricting, in good faith, content that it deems to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” or (B) giving others the technical means to restrict content described in (A).
Thanks to this distinction between a provider and a publisher of content, online platforms would not have liability for what other people put on the platforms, whether or not the provider has scienter. For online activity, this effectively eliminates even the New York Times actual-malice inquiry. And it meant that providers wouldn’t be responsible for the costly and impractical analysis that goes into moderating user-generated content. Over time, Section 230(c)(1) and (c)(2) came to be seen as blanket protection for ICS providers.
Why has Section 230 become controversial?
We have now had almost a quarter century with Section 230, and it has become somewhat controversial. Some people think that online platforms don’t police content enough. These people worry about defamation, falsehoods, child predation, extremism, and other bad stuff proliferating on online platforms. Other people think that online platforms police content too much, or unevenly, thereby squelching the freedom of expression that Section 230 was meant to protect. Some people worry about both.
In tomorrow’s post, we will explore the reforms proposed by President Trump, the Department of Justice, and the Federal Communications Commission to address these concerns.