
The Debate Over Liability for AI-Generated Content

By: Kristin Rheins / 08.08.2023

Lawmakers and industry professionals alike are beginning to think about the future implications of artificial intelligence and, specifically, generative AI. Generative AI analyzes and processes existing data patterns to generate new text, images, and musical compositions. The objective is to mimic human-created work so precisely that the artificially generated content is indistinguishable from it. Because humans and their actions on the internet are bound by the law, this raises the question: how should we go about establishing rules for generative AI?

An area where the distinction between human- and AI-generated content will be particularly important is content liability. Put simply, is the company operating a chatbot liable for the text and images it generates, or does liability fall on the user who prompted them? If ChatGPT writes a blog post full of misleading statements about an individual, is ChatGPT responsible for producing defamatory content? Or, alternatively, is it the responsibility of the user who posts the blog?

Currently, Section 230 of the Communications Decency Act of 1996 prevents a website that displays third-party content from being treated as the legal publisher of that content, placing liability for online content on those who post it. This insulates not only social media websites from liability, but also e-commerce sites and any other platform that publishes consumer reviews. In applying Section 230 to generative AI, we need to consider the degree to which the user is responsible for the generated content, and thus how much responsibility should fall on the tool itself.

Two opposing interpretations of whether Section 230's liability protections should apply to AI tools exist in the current debate. The first argues that content generated by artificial intelligence applications is driven by third-party input, so Section 230 protections should apply; the other maintains that Section 230 protects third-party content, not content generated by a platform itself. The answer to generative AI's legal status hinges on a single question: Do tools like OpenAI's ChatGPT and DALL-E qualify as material contributors to the content they generate, removing them from Section 230's scope?

The first legal argument would shield generative AI from liability under Section 230's safe harbor because generative AI output is driven by human input, so the application is not a material contributor to the content. This would amount to an admission that generative AI is not responsible for the content it produces. Rather, liability falls on the initial third-party prompt and the wealth of pre-existing data pulled from the web that together create the content in question.

Benefits of extending Section 230 protections to generative AI include the freedom to innovate unfettered by costly lawsuits and a healthier competitive market. While larger, established companies have robust legal teams and the funds to take on liability litigation, smaller start-ups and independent developers are not as readily able to answer for every statement produced by a program that can be manipulated by users. Holding the companies that develop artificial intelligence technology liable for potential mistakes could produce a chilling effect that prevents smaller entities from entering the market in the first place and allows larger ones to dominate.

Arguments opposing Section 230's application to generative AI claim that generative AI does not play a passive role in creating the content it produces. Section 230 protects platforms that host third parties' posts; organizing and editing third-party data may transform the AI into a material contributor, disqualifying it from Section 230 immunity. Because generative AI creates content rather than hosting it like a Reddit message board or a Twitter timeline, it could be argued that it has decidedly more editorial agency over what is created.

Former Representative Chris Cox and Senator Ron Wyden, the co-authors of Section 230, claim that "Section 230 is about protecting users and sites for hosting and organizing users' speech" and was never intended to shield companies from the consequences of the products they create. Further, Sam Altman, the CEO of OpenAI, testified during a Senate hearing that he believes Section 230 does not provide an adequate regulatory framework for generative AI tools and that innovators must work with the government to formulate new solutions.

Without Section 230 protection, the primary risk of holding companies accountable arises when users directly prompt AI to generate unlawful content. Should authorities view the AI as actively creating those harmful messages, or as a passive platform that enables its users to spread violent or false information? If the former, developers must anticipate and mitigate every possible input that could produce harmful content, and free expression will inevitably be limited. Because these platforms would be exposed to a new wave of litigation, curbing legal risk would require censoring both malicious and legitimate speech to reduce exposure to culpability.

While a clear answer to generative AI's liability has yet to be reached, some policy experts suspect that Section 230 may shield companies only in specific scenarios. Even if each harmful act involving AI is evaluated on its specific facts alone, the amount of creative license the AI exercises over its output is difficult to measure with reproducible accuracy and precision. And even if Section 230 is the appropriate regulatory structure for now, generative AI raises broader questions about the future of content moderation.