Artificial Intelligence, Not Artificial Litigation

October 6, 2025
Philip S. Goldberg and Josh Hansen
Download PDF

United States-based companies have become world leaders in generative artificial intelligence (AI), which is transforming our lives—from creating better health care diagnostics and treatment to spurring new areas of economic growth. However, even when developed, deployed and used properly, AI can make mistakes. And, some people will use AI for nefarious purposes. These dynamics have led federal and state governments to actively consider how best to regulate generative AI to reduce these risks without impeding the pathways to innovation.

The federal government started this regulatory effort in 2023 with an Executive Order instructing agencies to develop policies to strike this balance. The current President built on this Order by prioritizing an environment in which American AI innovation can flourish. In the states, Colorado became the first to enact a broad AI-specific law in 2024, giving the state attorney general authority to issue AI regulations. This year, Texas adopted a narrower framework. In addition, the California Legislature passed an AI bill last year, but Governor Newsom vetoed it, opting to allow more time to get this balance right. Governor Youngkin in Virginia did the same this year. In other states, major AI bills have been introduced, but none have been enacted.

Given the economic and national security imperatives of AI, state leaders from both political parties have recognized that getting AI regulation right is critical and that getting it wrong poses a risk to the nation as a whole. They have also made clear that while AI may be new, it is just a tool. The fraudulent, unfair or deceptive use of AI—just as with any other software—is already unlawful. State law enforcement officers already have the tools to protect their people.

Despite this, one of the most controversial ideas making the rounds in the states is introducing what have been called “vigilante actions” or, in legal terminology, private rights of action, for AI enforcement. In these suits, private, for-profit lawyers—not government prosecutors—would be allowed to sue anyone they claim violated the law, regardless of how speculative the assertion or whether anyone was harmed. The lawyers would then keep the statutory penalties for themselves.

Federal and state policymakers have learned the hard lesson that when private lawyers can make money enforcing regulations, they do not necessarily make decisions that are in the public interest. They will often generate high-dollar litigation over minor, technical violations—including when the alleged violation did not harm anyone and even when the violation may not have actually occurred. When similar private rights of action have been included in other regulatory regimes, their trail has been littered with these types of abusive lawsuits, as well as settlements focused on generating money for lawyers, not providing value to consumers, employees or anyone else.

This white paper details the history of private rights of action, how they have led to lawsuit abuse, and why they are neither needed nor appropriate for regulating AI. Private litigation should stay in its lane, reserved for seeking remedies for people injured by alleged wrongdoing. And, as Massachusetts Attorney General Campbell made clear last year, people already have robust avenues for seeking such remedies when it comes to AI. Creating more ways for private lawyers to sue over AI is unnecessary and would cause more harm than good.

Read the full paper here. 
