
A Crash Course on Section 230: What it is and why it matters

By: Malena Dailey / 02.17.2023

Part of the Communications Decency Act of 1996, Section 230 has become a widely debated and frequently misunderstood staple of the conversation about regulating tech companies. With calls for reform coming from both sides of the political aisle, it seems on its face as though there is some consensus on the moderation of online content. Diving just below the surface, however, reveals that this could not be further from the truth. With Democrats and Republicans approaching the issue from opposite directions, and the effects of Section 230 commonly misrepresented, it’s critical that any effort at reform take a measured approach that accounts for the positive role this law has played in the dramatic expansion of the internet.

What does Section 230 say?

From 47 U.S.C. § 230(c)(1):

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

From § 230(c)(2):

“No provider or user of an interactive computer service shall be held liable on account of — any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

What does this mean for the internet today? 

Section 230 applies to “interactive computer services,” meaning any online platform or service that hosts third-party content. This covers social media sites, but also product reviews on e-commerce sites, independent seller listings on marketplaces like Etsy or eBay, and any other case of an online service hosting content from third parties. The internet as we know it today is largely built around this third-party content, with many online platforms and services relying on their users to provide information, services, and products, and many small businesses relying on the infrastructure of other websites to reach audiences they couldn’t reach on their own.

To break it down from the beginning, Section 230 says that a website is not the publisher of third-party content, shifting responsibility away from the platform and onto the individual, who is liable for their own speech. If you were to post something defamatory on anything from a social media account to a WordPress blog, you would be the one responsible for your comment, not the company whose service you used to post it.

Section 230 is also the mechanism that allows online platforms to moderate content on their sites, removing any content they find objectionable without being treated as the publisher. This may be because the website serves an intended purpose, as is the case with Reddit communities where posts unrelated to the forum are taken down, or because the platform does not want to host speech it finds dangerous or harmful to its users, such as misinformation or hate speech.

Who wants to change Section 230? 

Efforts to change Section 230 have come from several directions. Proposals from congressional Democrats have included efforts to make platforms liable for health misinformation, or for cases where online activity has led to real-world violence. The Biden Administration has also called for the law’s repeal. While potentially well-intentioned, platform liability for this type of content would make it functionally impossible for websites to host third-party content, while shifting responsibility away from the root of the problem: those who spread misinformation and violent rhetoric. If a company is responsible for the speech of all its users, it will need to review and approve every piece of content posted to ensure it doesn’t get sued. Under the current model, the sheer volume of content posted every day means that, despite moderation efforts and algorithmic flagging, companies don’t know exactly what is on their sites at every moment. Pre-approval would require a sort of cable model for the internet, where only vetted content appears online, halting the flow of information we enjoy today and taking away the ability of individuals to have a voice in the public discourse.

On the other side of the aisle, congressional Republicans have taken their own stab at Section 230, motivated by the alleged “censorship” of conservative voices online. Similar sentiments have been echoed by figures ranging from the Trump Administration to Supreme Court Justice Clarence Thomas. At the state level, Republican governors in Texas and Florida have signed laws banning content moderation that targets certain viewpoints. Critics of these proposals argue that they would force online users to be inundated with harmful but legal content, such as misinformation, conspiracy theories, hate speech, Nazi propaganda, and harassment.

The combination of these two efforts is paradoxical. In a world where no moderation is allowed and companies are responsible for the speech of all their users, as would be the case if Section 230 were repealed entirely, websites would be forced to host the very speech that opens them up to countless lawsuits.

Why is this important?

Section 230 has allowed third-party content to propel the creation of a new online economy, with entrepreneurs able to sell products, post video or written content, and promote their work to an established audience at little or no cost. Consumers are accustomed to information and entertainment at their fingertips, much of it also provided at little or no cost. In an important sense, the powerful job creation associated with the tech boom would not have been possible without Section 230.

While there is always room for improvement, online platforms need to moderate content in order to serve their purpose. And while this debate is often framed around the largest tech companies, weakening Section 230 would be as devastating to any small, independent website as it would be to Google or Amazon. It’s not in Instagram’s interest for its users to be bombarded with violent posts, but it’s equally important that an independent food blogger posting recipes can remove harmful content from the comment section. Exposing these entities to liability for the actions of any one individual would fundamentally change the internet as we know it, significantly cutting down our access to information and making it harder for individuals to have a voice online. If Congress or the courts decide to alter the system through which the internet has grown, they must be aware of the consequences for independent businesses, individuals, and the future of online speech.