With the Supreme Court set to take on Gonzalez v. Google this term — a case with momentous implications for the legal viability of internet services as we know them — the fate of Section 230 of the Communications Decency Act is in question.
Section 230 is the statute that shields online platforms from liability for content posted by their users, a protection that has been integral to the internet ecosystem’s explosive growth. By allowing platforms to take down third-party content they deem harmful to their users in “good faith,” while ensuring they are not treated as the publishers of that content, Section 230 is the legal mechanism that has enabled innovative business models built on user-generated content, shaping the robust digital economy that consumers and entrepreneurs enjoy today.
From a consumer standpoint, these business models provide a plethora of free resources, entertainment, and educational materials. In the case of entrepreneurs, the online creator economy is estimated to be worth more than $100 billion worldwide, with more than 425,000 full-time equivalent jobs reportedly supported by the YouTube platform alone.
In Gonzalez v. Google, however, the central question goes beyond Section 230’s protections for third-party content, asking instead whether the targeted algorithms employed by these platforms enjoy the same protections.
Most major online platforms use data at various levels to recommend content to users, whether by drawing on personal or demographic information to tailor the experience to a user’s interests, or by surfacing popular or relevant content at the top of the feed. These automated decisions curate a feed that appeals to the user and benefits the content creator, whose work is highlighted to new audiences.
Colloquially, people who talk about social media algorithms are usually referring to the sophisticated code many of these companies use to target content to their users. From a technological standpoint, however, the term “algorithm” covers any method of content sorting, from a simple ordering by date or alphabet to a complex, individually curated feed. There is no default method of content sorting: every company or developer must choose an algorithm to sort content, and the only difference lies in the complexity of the algorithm they choose to employ.
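To make that point concrete, here is a minimal, purely illustrative sketch in Python, using invented posts, field names, and a toy scoring rule rather than any platform’s actual code. Whether the feed is ordered by upload time, by view count, or by a simple “personalization” score, the developer has chosen a sorting algorithm; the versions differ only in complexity.

```python
# Illustrative only: hypothetical post data, not any real platform's feed logic.
from datetime import datetime, timezone

posts = [
    {"title": "Travel vlog", "uploaded": datetime(2023, 1, 5, tzinfo=timezone.utc),
     "views": 12_000, "tags": {"travel"}},
    {"title": "Cooking tutorial", "uploaded": datetime(2023, 1, 7, tzinfo=timezone.utc),
     "views": 4_000, "tags": {"cooking"}},
    {"title": "Guitar lesson", "uploaded": datetime(2023, 1, 6, tzinfo=timezone.utc),
     "views": 90_000, "tags": {"music"}},
]

# "Simple" algorithm: a reverse-chronological feed is still an algorithm.
chronological = sorted(posts, key=lambda p: p["uploaded"], reverse=True)

# "Popularity" algorithm: most-viewed content first.
most_viewed = sorted(posts, key=lambda p: p["views"], reverse=True)

# "Curated" algorithm: a toy personalized score that boosts posts matching a
# user's interests -- a stand-in for the far more complex models platforms use.
user_interests = {"music", "travel"}

def personalized_score(post):
    """Boost a post's view count when its tags overlap the user's interests."""
    return post["views"] * (2.0 if post["tags"] & user_interests else 1.0)

curated = sorted(posts, key=personalized_score, reverse=True)
```

All three feeds show the same third-party content; the only variable is how the platform decided what to put first.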
It is difficult to draw legal lines around this complexity. For example, if a platform lists a seemingly harmful piece of content first, making it the most obvious choice for users to select, is it liable for that content only if the placement is shown to be the result of a curated algorithm, or is it also liable if the content appears first simply because the feed is ordered chronologically? Either way, there is some risk of exposing users to harmful third-party content, so prohibiting one platform’s algorithms (say, Google’s) does not provide a general solution.
That means the Supreme Court would either have to define algorithms narrowly enough that only specific types expose a platform to liability, or rule that Section 230’s liability protections are lost for all types of content displayed online.
Let’s be clear: A court decision that ended Section 230’s liability protections would make hosting third-party content functionally impossible for websites. More than 500 hours of video are uploaded to YouTube every minute, roughly 720,000 hours each day, making it a monumental task to vet every video before it is published. The risk of being sued would lead most companies to conclude that offering third-party content is not worth it. And if only data-driven, targeted algorithms were ruled to be exposed to liability, how likely is it that users would sort through hundreds of hours of uncurated uploads in hopes of discovering useful information?
Moreover, the ramifications of making all content providers liable in lawsuits would spread across the entire internet ecosystem, including online shopping, travel sites, and app stores, all of which rely on user reviews that are curated to reduce fakes and “ballot-stuffing.” In an era of deepfakes and sophisticated artificial intelligence chatbots, it is all the more essential that online platforms be able to apply algorithms their users can trust.
There’s no doubt that Section 230 raises difficult issues that need to be carefully considered by policymakers. But subjecting online platforms to lawsuits because their algorithms occasionally highlight content that someone objects to would fundamentally destroy the internet economy, while failing to address the threat posed by truly dangerous online content.