What America’s Governors and Mayors Can Learn from China’s Local-Facing Investment Strategy

INTRODUCTION

This policy paper illuminates China’s successful advanced technology investment strategy, with the goal of drawing lessons for U.S. policymakers. In particular, we show that China’s investment strategy is much more decentralized than is usually realized. The broad directions of science and technology investment policy are set in Beijing, but much of the execution and funding is undertaken by provincial and municipal officials. We explain the pluses and minuses of China’s “local-facing” investment strategy, and what it means for American governors and mayors.

First, to provide a starting point, we document the persistent weakness in state and local investment spending in the United States. In the United States, state and local government investment spending has stalled out over the past two decades, rising by only 15% in real terms from 2005 to 2025. Meanwhile, federal nondefense investment spending is up 87% over the same stretch, reflecting America’s top-down approach to tech investment policy.

Second, we explain how China’s local-facing investment strategy works, and why it has been successful. Unlike their U.S. counterparts, China’s provincial and municipal governments are pouring enormous sums of money into supporting advanced technology industries such as semiconductors, electric vehicles, satellites, biotech, humanoid robots, and the manufacture of key aerospace materials such as carbon fiber. One example: In 2025, the municipal government of Shenzhen, China’s third-largest city, backed a 5 billion yuan fund that would invest in chip design and other advanced technologies. That can be valued at somewhere between $700 million and $1.4 billion, depending on whether we use the official exchange rate of roughly 7 yuan to the dollar, or the purchasing power parity (PPP) rate of roughly 3.5 yuan to the dollar. In either case, it’s a substantial investment of resources for one city and one industry.
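The yuan-to-dollar range above follows from simple arithmetic; the sketch below (using the approximate rates cited in the text, not official figures) shows how the two valuations arise:

```python
# Back-of-envelope conversion of Shenzhen's 5 billion yuan fund.
# Both rates are the rough figures cited above, not official data.
fund_yuan = 5_000_000_000
official_rate = 7.0  # yuan per dollar, approximate market exchange rate
ppp_rate = 3.5       # yuan per dollar, approximate purchasing power parity rate

usd_official = fund_yuan / official_rate  # ~$0.7 billion
usd_ppp = fund_yuan / ppp_rate            # ~$1.4 billion
print(f"official rate: ${usd_official / 1e9:.2f} billion")
print(f"PPP rate:      ${usd_ppp / 1e9:.2f} billion")
```

The same two-rate comparison applies to the debt figures discussed later in the paper.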

This government decentralization is essential to explaining how China’s government-led science and technology investment policy is able to push ahead on multiple technological frontiers simultaneously. Key advanced technology industries are being subsidized and supported by local policymakers who can move much faster and more flexibly than central government bureaucrats could. The result is the economic equivalent of a stampede — a barely controlled rush to add technological capacity without immediately worrying about profitability or the rapid buildup of debt.

Third, following on that insight, we show the downside of China’s success: Provincial and municipal governments have taken on astronomical levels of debt, on a scale that exceeds the U.S. AI investment boom. International Monetary Fund (IMF) estimates suggest that municipal and provincial Chinese governments, plus their affiliated “local government finance vehicles” (LGFVs), borrowed 16 trillion renminbi in 2024 and 2025, much of that from Chinese banks and state-owned enterprises. That’s the equivalent of adding more than $2 trillion in debt in two years.

Fourth, we identify what the U.S. can learn from China’s example. On the plus side, the success of the Chinese approach should encourage U.S. state and local governments to be more proactive in funding and supporting advanced technology industries. Key state and local investments should include AI data centers, AI application development, worker training, and AI extension programs; space-related infrastructure and manufacturing, funded by new financing tools such as “space bonds”; and support for new ventures in advanced biosciences, manufacturing, construction, and agriculture.

However, financially prudent risk-taking is the key to sustainable growth. Chinese provincial and municipal governments have taken on massive debts that may impair long-term growth and perhaps even trigger a financial crisis. U.S. states and cities should invest in their future while keeping their borrowing under control.

Read the full report.

Fung in the Nevada Current: Vying to be a leader in allowing autonomous vehicles, Nevada is a laggard in regulating them

[…]

Surveys suggest that many Americans distrust fully autonomous vehicles. But Andrew Fung, a senior analyst at the Progressive Policy Institute who focuses on economics and technology, says that is likely to change, and to change very quickly.

He referenced a study that found a 45 percentage point shift in public opinion in San Francisco between 2023 and 2025. San Francisco, like Las Vegas, has been a hub of AV testing.

“It seems like when people get their hands on these cars and can actually experience them and what they’re like, they’re generally very positive of them,” said Fung. “When they think about them as kind of an abstract thing, the sentiment is much less positive.”

Those findings could serve as a warning to state lawmakers: Get ahead of statutory and regulatory issues around safety, taxation and legal liability now because trying to do so after rapid adoption will be far more difficult.

“I would recommend legislators not take their eye off the ball,” Fung said. “Think about how to get ahead of the issues. How do we set ourselves up for success, so that you’re not chasing behind after the rollout comes?”

[…]

Read more in the Nevada Current

Mandel in PYMNTS: AI Policy Shifts From Innovation to Economic Payoff

Governments are reexamining technology as industrial policy increasingly confronts an era defined by artificial intelligence and uneven economic gains.

Dr. Michael Mandel, chief economist and vice president at the Progressive Policy Institute, told Competition Policy International (CPI), a PYMNTS company, in an interview that the core issue is not whether innovation is occurring, but where its benefits are accumulating.

“We’ve got rapid productivity growth in the information sector,” he said, “but productivity growth in the physical sector has slowed down to close to zero.” That divergence has left entire industries and regions lagging, creating what he described as both an economic and political problem that now sits at the center of industrial policy.

Watch the full interview

PPI Statement on the House Energy and Commerce Committee’s Kids’ Online Safety Markup

This week, House Energy and Commerce Chairman Brett Guthrie announced a full committee markup of the App Store Accountability Act (ASAA) as part of a new kids’ online safety package. The bipartisan Parents Over Platforms Act (POPA) was omitted. After months of productive bipartisan talks, negotiations collapsed, and Committee Democrats say the Majority is “moving forward with a partisan package that lets Big Tech off the hook.” PPI shares that concern. Child safety legislation that passes committee on a party-line vote has no path in the Senate, and this approach risks squandering months of good-faith work on an issue where real agreement was within reach.

Our concerns with the ASAA go beyond process. The bill’s age verification mandates don’t just affect teens — they require every adult to hand over identity documents to download any app, whether it’s Instagram or a weather widget. The Majority keeps pointing to Apple Pay as proof that this is easy, but about one in five Americans don’t have a credit card, and setting up Apple Pay often requires a government ID upstream. The burden falls hardest on those who can least afford it. Advocacy groups have warned that identity-linked verification causes adults to self-censor and marginalized users to disengage. Requiring universal ID collection to use an app store also runs counter to basic data minimization principles — the bill collects far more personal information than is necessary to keep kids safe, and shares it with every developer regardless of whether their app poses any risk.

The bill also moves in the opposite direction from the FTC, which just issued an enforcement policy statement defining “age verification” to include age estimation and inference — privacy-preserving approaches that the ASAA ignores. At the same time, the latest bill text explicitly exempts third-party app stores, which happen to be the actual channel where users download adult content apps that Google Play and the App Store prohibit. And while the Majority has taken a hard line on the need for rigorous verification, the new version of the bill lets parents simply attest to their child’s age — an internal contradiction that undercuts the rationale for making every adult produce an ID in the first place.

We’d also note that the argument made by the ASAA’s supporters — that minors shouldn’t be entering into contracts with app stores — doesn’t hold up. Under U.S. common law, minors can enter into contracts, but those contracts are voidable in every state. Minors already have more contractual protection than adults do.

PPI continues to believe that POPA is the stronger bill. It’s bipartisan, introduced by Reps. Auchincloss (D-Mass.) and Houchin (R-Ind.), and supported by child safety advocates, small businesses, and industry stakeholders. It focuses verification on apps that actually provide different experiences for adults and minors, rather than treating every download the same. It gives parents real tools instead of burying them in consent requests they’ll eventually tune out. Our polling shows that 70% of parents want protections that keep working while their kids use apps, not a one-time check at download, and only a third think app store verification alone will keep kids safe. POPA was designed with those concerns in mind, and unlike the ASAA, whose state-level counterpart was blocked by a federal court as overbroad, it’s built on a legal foundation that can hold up.

Rep. Auchincloss has filed an amendment to Thursday’s markup that would strike the ASAA language and replace it with POPA. We urge the Committee to support that amendment and use it as the starting point for legislation that can actually get to the president’s desk. Kids and parents can’t wait for Congress to get this wrong twice.

Assessing a New California Broadband Report

The broadband marketplace is intensely competitive. Cable is taking wireless market share, wireless is taking cable market share, satellite is nipping at the heels of everyone, and consumers are the winners. Broadband price increases have stayed well below the inflationary jumps seen in many other industries and services.

This obvious competition, paired with low inflation, makes a new report from the Public Advocates Office (PAO) of the California Public Utilities Commission (CPUC) all the more curious. The report, “Broadband Competition and Pricing Strategies in California’s Urban Markets,” was based on an interesting new data set that looked at promotional broadband rates in four California cities by detailed location, and compared those prices to the number of broadband providers and household income at those locations. The report’s goal, using regression analysis, was to show that fewer gigabit providers lead to higher prices and that providers were engaged in digital discrimination.

We applaud the effort to assemble the data set. However, the analysis makes several fundamental mistakes. First, the pricing analysis left out important variables such as population density (which could easily have been added at the census block level). A high-density area is generally cheaper to connect, on a per-household basis. As a result, high-density areas are more likely to attract new providers and to lead existing providers to offer lower promotional rates. Conversely, low-density areas will typically have fewer providers and higher promotional rates.

Thus, by leaving out density from their analysis, the report potentially found spurious correlations between fewer providers at a location and higher promotional rates. To put it another way, the first sentence of the executive summary says, “Broadband prices in California’s urban markets vary widely depending on the level and type of competition available to households.” But that’s tautologically true because competition was basically the only independent variable in their regression equation (the one exception was income, which we’ll discuss below). And disturbingly, the report even omitted results that did not show the desired correlation between price and competition. The report appendix notes that “Comcast is excluded from the regression analysis because its pricing strategy reflects large, market-wide discounts followed by secondary geographic variation that do not correspond to local competition intensity.” In other words, the Comcast regression did not show the desired results, so the report did not show it. In addition, the report acknowledges (note 37) that its regression analysis produces inconsistent results for Charter.
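The omitted-variable problem described above can be illustrated with a small simulation. In this hypothetical sketch (all numbers are invented for illustration), price is driven entirely by density, density also determines provider count, and a naive regression of price on providers still “finds” a large effect that disappears once density is controlled for:

```python
import random
import statistics

random.seed(42)

# Invented data: dense areas are cheaper to serve (lower price) and also
# attract more providers. Competition has NO causal effect on price here.
n = 2000
density = [random.uniform(1000, 10000) for _ in range(n)]        # people/sq mi
providers = [round(d / 3000) + random.randint(0, 1) for d in density]
price = [120 - 0.005 * d + random.gauss(0, 5) for d in density]  # $/month

def ols_slope(x, y):
    """One-variable OLS slope: cov(x, y) / var(x), demeaning internally."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Naive regression of price on provider count "finds" that more providers
# mean lower prices, even though the true causal effect is zero.
naive = ols_slope(providers, price)

# Controlling for density (regress both variables on density, then regress
# the residuals on each other) makes the spurious effect vanish.
d_slope = ols_slope(density, price)
resid_price = [p - d_slope * d for p, d in zip(price, density)]
p_slope = ols_slope(density, providers)
resid_prov = [pr - p_slope * d for pr, d in zip(providers, density)]
partial = ols_slope(resid_prov, resid_price)

print(f"naive slope (price on providers): {naive:.2f}")   # large and negative
print(f"slope after controlling for density: {partial:.2f}")  # near zero
```

This is exactly the risk the report runs by regressing price on competition without a density control: the naive coefficient can be large and statistically significant while the true effect is zero.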

The second elementary mistake, related to the first, was the repeated confusion between correlation and causation. For example, the report asserts confidently that “San Diego has the most limited competition, with many neighborhoods served by a monopoly gigabit provider.” But as the table below shows, San Diego has half the population density of the other three cities, and it is hillier. So it might be more accurate to say that San Diego is the costliest of the four cities, on a per-household basis, in which to lay fiber and cable, leading to fewer providers and higher promotional prices. Causation is very different from correlation.

                                                           San Mateo   Oakland   Los Angeles   San Diego
Population density of city (people/sq mile)                    8,710     7,878         8,311       4,256
Population density of surrounding county (people/sq mile)      1,704     2,280         2,468         784

Data: Census Bureau

Third, the report ignores the intense competition in sub-gigabit markets, which many subscribers intentionally choose. The report focuses only on the four largest fixed broadband providers, even though satellite providers such as Starlink and 5G home internet providers such as T-Mobile and Verizon are available in many locations covered by the report.

Finally, we come to the impact of income, which is the one non-competition variable in the report’s analysis. On its website, the PAO alleges digital discrimination in the California broadband space. However, the analysis in this report shows that “providers do not systematically adjust promotional pricing based on income levels” (p. 16). Indeed, the report could have been titled “No Digital Discrimination Found in California’s Urban Broadband Markets.”

The CPUC should be careful not to rely on this flawed study to make any policy judgments related to broadband. Rather, we should take comfort that the increasingly competitive marketplace for broadband is benefiting consumers. 

What Policymakers Can Learn from Japan and the EU on Mobile Platform Regulation

Introduction

Late last year, Japan joined the recent wave of countries attempting to regulate smartphone platforms such as iOS or Android when its new Mobile Software Competition Act (MSCA) went into effect. Like similar statutes around the world, the new legislation aims to give consumers more choices when it comes to how and where they purchase apps, while improving access for third-party developers.     

But compared to some of its international peers, Japan has pursued a more carefully tailored regulatory approach that should be seen as a model. It stands in especially stark contrast to the European Union’s overbroad Digital Markets Act (DMA), avoiding many of the safety and security issues created by the EU’s effort. Here are four key areas where the Japanese and European approaches differ, underscoring how the MSCA offers a better approach.

Security and User Choice

How to give consumers more choice about where they purchase apps without compromising their security is one of the most important questions in mobile platform regulation. Europe and Japan have taken vastly different approaches to the issue.

Under Europe’s DMA, platforms must allow users to download apps directly from third-party websites, without intermediaries providing protections or checks on in-app content. This approach creates substantial security risks, allowing for the distribution of malware or other content that skirts existing security reviews conducted by platforms.

Under the law, platforms are also required to let developers use an alternative payment system — so, for instance, your favorite music streaming app could direct customers to its own payment system and entirely bypass Apple or Google’s, which might charge the companies a fee. As a result, users might no longer be able to use trusted platform payment options and instead be forced to share their financial information with an unfamiliar firm. The tools and protections that users have come to know and trust (like refund and fraud monitoring) could be removed as a result of poorly designed attempts to increase user choice.

By contrast, the MSCA takes an approach that protects security while still encouraging user choice. By permitting measures “ensuring cybersecurity for smartphone use,” mobile platforms can ensure that alternate app stores include security protections or block criminal content. Meanwhile, alternative payment systems must appear alongside platform payment systems, expanding user choice with flexibility rather than reducing it. 

The MSCA’s approach acknowledges that competition regulation should expand the choices users have, not eliminate existing ones or push users towards less secure alternatives. 

Protecting Kids

Policymakers around the world are grappling with a difficult question: how do we keep young people safe online in an increasingly digital world? There are no easy answers to this question. But at the very least, online competition rules shouldn’t make it more difficult to protect kids. 

Unfortunately, the DMA, passed in 2022, fails on this front. For starters, it does not include explicit protections for minors. As a result, children are treated the same as adults when it comes to the DMA’s rules on alternative app distribution and payment services, discussed above. Since the law forces platforms to allow relatively free access to sites and systems outside of the platform owner’s control, efforts to restrict the content that kids can access could be considered a violation of the DMA. For example, age restrictions for apps distributed outside the app store may not be permissible under the DMA; platforms have already been forced to allow apps containing explicit content to be installed through alternate app stores. Lacking a carveout for youth protections, kids could be left with unmitigated access to explicit or harmful content online.

Unlike the DMA, the MSCA permits measures “safeguarding youth who use smartphones.” Measures that might face legal challenges under the European approach can remain in place in Japan. Tools like limiting transaction links in apps designed for children, restricting access to alternative markets through controls at the operating system level, and limiting targeted advertising to minors are all possible through the MSCA, empowering parents. 

As PPI has previously explored, parents are strongly supportive of these sorts of controls, which enable them to make decisions about their kids’ online access. The MSCA’s approach does not resolve all of the difficult questions about kids’ online safety, but it provides the flexibility needed to maintain existing safeguards while still preserving competition. 

Interoperability and Privacy

Interoperability — allowing third parties to interface with a platform’s systems and data — is an admirable strategy for strengthening competition and helping users get more out of their devices. But it is not without tradeoffs: integration inherently requires control and access to potentially sensitive data. Competition regulation should be selective about where and how interoperability is mandated in order to maximize the benefits for users while maintaining safety as much as possible.

The European approach forces platforms to provide third parties with sweeping access to user data for interoperability purposes. For example, mobile platforms have faced requests to hand over the full contents of users’ notifications or the history of Wi-Fi networks they have connected to, regardless of how the third party intends to use the data. Notification contents could expose two-factor authentication codes or private details, while Wi-Fi history could reveal where and how a user is spending their time. With no option for platforms to deny requests for sensitive user information, third parties may maliciously harvest and monetize data for their own gain, all while consumers remain unaware of the risks. These overly broad interoperability mandates harm user privacy and could eventually erode trust in platforms, hurting the market for all developers.

Japan’s competition law takes a narrower approach to interoperability access. The MSCA requires that requests remain “proportionate to the competition related problems at hand” and allows platforms to reject inappropriate data access attempts. Platforms can also reject requests from parties who are legally required to share collected data with foreign governments, keeping user data safe. These measures mean that opportunities for interoperability, which benefit users, can remain in place, while those that exploit them are rejected.  

Innovation and Intellectual Property

Governments also need to balance their desire to encourage competition with the need to incentivize innovation. Requiring platforms to share features or access with competitors can provide users more choice, but it also weakens the returns from research and development. This can lead to stagnation as companies find themselves unwilling to invest in innovation. In designing competition regulation, policymakers are forced to make a choice about how to strike this balance, and the DMA and MSCA represent meaningfully different answers with real consequences for users.

The DMA’s “interoperability by design” approach means that platforms are often required to share features and IP with third parties without compensation, including early notice about coming updates. This gives competitors valuable insight into platforms’ future plans, and introduces significant costs for platforms to make new systems and features compatible while competitors’ development costs are effectively subsidized. 

These misaligned incentives mean that platforms may withhold or delay new features, leading to a worse experience for users. Recently, Apple has delayed iOS features like Live Translation or withheld others like iPhone Mirroring entirely in Europe as a result of the DMA. Apple argues that because of concerns over privacy and compliance with interoperability requirements, such delays are likely to continue. Today, European users have a limited product compared to their international counterparts, not due to technical limitations but because of the high costs of legislative compliance.

Japan’s “proportional interoperability” approach is narrower and contains protections for “legitimate exercise of intellectual property rights,” including the ability for platforms to charge for interoperability access. Platforms maintain the right to evaluate whether to implement interoperability access on some features, allowing them to more effectively use resources and preserving incentives for R&D investment. Tellingly, Japanese users have so far not faced the same feature delays or limitations that European users have. The results show that the MSCA’s proportional approach can still address genuine competition concerns without severely damaging incentives for innovation.

Conclusion

The DMA and MSCA present two significantly different approaches to increasing competition in the smartphone ecosystem. The DMA’s hardline, no-exceptions approach has lofty ambitions, but has already led to negative tradeoffs for consumers, including reduced protections for minors, privacy risks with interoperability, and delayed features. While the full impact of the still-young MSCA remains to be seen, its moderate approach appears poised to avoid the same pitfalls that have hampered the DMA to date. 

As both laws mature and their impacts are fully understood, the comparative outcomes will be instructive for policymakers around the world considering similar legislation. Effective competition policy should expand user choice without major sacrifices to security, privacy, or other protections that users value, and Japan’s efforts show this balance is achievable.

POPA vs. ASAA: The Right Path Forward for Kids Online Safety

Congress is moving swiftly ahead on legislation that would require smartphone apps to verify the ages of their users in order to protect children’s safety online. But with full markups on several bills scheduled for the coming weeks, lawmakers face an important choice between competing approaches.

Among the leading bills, H.R. 6333, the bipartisan Parents Over Platforms Act (POPA), introduced by Reps. Jake Auchincloss (D-Mass.) and Erin Houchin (R-Ind.), stands in strong contrast to H.R. 3149, the App Store Accountability Act (ASAA), introduced by Rep. John James (R-Mich.). Though both bills aim to safeguard kids through requirements on app stores, POPA stands out as the more practical, privacy-forward, and parent-aligned approach. Here are four key policy areas where the two bills differ.

Shared Responsibility

Checking an app user’s age is a complex task, and responsibility for it should fit the actual roles of app developers and stores. While both bills enlist mobile app stores in the age assurance process, ASAA establishes onerous requirements that would require them to collect government IDs from all users — minors and adults alike — regardless of what kind of app they want to download.

While app stores may be well-positioned to use basic age information to limit the apps younger people can download, they have little control or knowledge of what happens once an app is installed. The ASAA would make the point of download the only major check on online safety, with app developers holding minimal responsibility for providing safer experiences after their product is loaded onto a child’s phone. This strategy could inadvertently lead to poorly applied restrictions with content minimally tailored to be age appropriate and little control of how apps are actually used.

By contrast, POPA proposes a shared responsibility across the ecosystem, requiring app stores to conduct age checks at the point of download and developers to do the same when consumers use certain parts of their apps. As PPI wrote last month, parents believe strongly that verification should go beyond a one-time check. 

Consent Fatigue

For parental consent to be meaningful, it must be sustainable: Too many requests lead users to accept terms without reading them, much the way most of us now automatically click through the ubiquitous cookie consent banners websites started displaying following enforcement of Europe’s GDPR. This phenomenon, known as consent fatigue, should be carefully considered in the design of an age verification framework.

POPA gives parents tools to manage kids’ access without the process becoming overly frequent or demanding. For example, parents can restrict access by category or age rating, rather than approving every app download. Giving parents these tools means they can still make meaningful decisions about their kids’ activity without being inundated by routine approvals that might cause them to tune out.

Though designed with good intentions, ASAA’s approach is much more likely to overwhelm parents. The bill would require app stores to receive parental consent during each and every download request. Even if parents decide that their kids are ready to download some kinds of apps independently, ASAA does not provide a mechanism to let them do so. And with requirements to receive additional parental consent after “significant changes” are made to any app, requests are likely to be frequent.

Parental Control & Data Sharing

When handling sensitive personal information, privacy and choice should be foundational priorities. The age assurance process should strive to minimize data collection and sharing, obtaining only the information needed for age assurance and nothing beyond.  While POPA gives parents the agency to decide when and where their child’s age is shared, ASAA mandates broad sharing without consent. 

ASAA requires all new users to undergo age verification, and developers of all apps — even those without age-restricted content — to receive information about all users’ ages by default. This approach violates the principle of data minimization and puts all users at risk. Even if a user wishes to download an app that does not include age-restricted content, like a notetaking app or their favorite coffee shop’s app, they will still be required to undergo the age verification process. Parents and other users have no choice over the collection and sharing of their information.

POPA takes a narrower approach, allowing users to declare their age and giving parents the ability to choose to share age information with developers. While the bill encourages app stores to use techniques like age estimation to provide age assurance, it does not require it. 

Legal Considerations

Crucially, the concerns over scope and applicability considered in this piece are not purely speculative. Last month, a federal judge temporarily blocked Texas’s state-level age verification law on the grounds that it was “exceedingly overbroad” and “unconstitutionally vague.” If Congress is serious about protecting children and giving parents choice, it should pursue a legally durable approach that can withstand the same First Amendment challenges that the Texas law faced. POPA’s measured scope, with a defined set of covered applications and a focus on parental consent and choice, appears able to withstand this scrutiny. As markup approaches, lawmakers now have an opportunity to advance legally durable and practically designed age assurance legislation. Congress should choose POPA.

PPI Calls for New National Autonomous Vehicle Safety Reporting Framework

WASHINGTON — The Progressive Policy Institute (PPI) today released a new report highlighting the need for a national approach to autonomous vehicle (AV) regulation, especially as most oversight remains fragmented across states.

The report, titled “Building Trust Through Transparency: A New Federal Framework for Autonomous Vehicle Safety,” and authored by PPI’s Andrew Fung, Senior Economic & Technology Policy Analyst, Alex Kilander, Policy Analyst with PPI’s Center for Funding America’s Future, and Aidan Shannon, PPI Policy Fellow, comes at a time when autonomous vehicles from companies like Waymo continue to traverse American streets in record numbers. While these vehicles can drastically improve road safety, a lack of public trust surrounds AVs, threatening the industry’s expansion.

“Public perceptions of autonomous vehicles are still being shaped more by isolated incidents than by comprehensive data,” said Fung. “A unified national safety framework would replace anecdotes with evidence, giving regulators, companies, and the public a shared foundation to assess performance and build trust as the technology scales.”

The report calls for a two-layer approach:

  1. A public-facing dashboard that shows AV crash rates and safety comparisons between AVs and human-driven vehicles
  2. A granular database giving regulators access to the comprehensive, standardized safety statistics needed for rigorous oversight

“Smart, standardized transparency can shift the autonomous vehicle debate from speculation to evidence,” said Kilander. “That shift is critical to improving AV regulation while allowing responsible innovation to move forward.”

Read and download the report here.

Founded in 1989, PPI is a catalyst for policy innovation and political reform based in Washington, D.C. Its mission is to create radically pragmatic ideas for moving America beyond ideological and partisan deadlock. Find an expert and learn more about PPI by visiting progressivepolicy.org. Follow us @ppi.

###

Media Contact: Ian O’Keefe – iokeefe@ppionline.org

Mandel for The Hill: Local news has been pummeled by change. How AI can help.

The list of troubles facing local news operations seems to go on forever.

The rise of big-box stores and e-commerce has made local newspaper retail advertising almost superfluous.

The long-term decline in the population of small cities like Cairo, Illinois, has narrowed the subscriber base of many local papers, forcing closures and consolidations.

And the fall in the price of newspaper advertising — an analysis of Bureau of Labor Statistics data by my organization, the Progressive Policy Institute, shows it's down 15 percent since 2019 — has undercut the traditional business model of local news even further.

Read more in The Hill. 

Mandel in the Wyoming Star: EXCLUSIVE: The Great Build-Out. Part 3. Economics of Data Center Construction.

Dr. Michael Mandel, Chief Economist and Vice President at the Progressive Policy Institute, by contrast, leans into the idea that this build-out is more like building railroads than building Pets.com:

“We’ve gone through a long period where ‘physical’ industries such as agriculture, construction, manufacturing, and much of mining have stagnated compared to digital industries. This stagnation in physical industries has especially hurt states such as Wyoming, which has barely grown since 2019.

AI has the potential to transform physical industries, boosting productivity and incomes and opening up new markets. AI will be especially beneficial to states such as Wyoming, which has shown no productivity growth over the past 15 years.

The growth of AI requires investment in large-scale data centers. Data centers are necessary, both to train the underlying models and to power the applications. This investment is no different, conceptually, from laying down rails for trains or drilling for oil. You need to spend on technology to get the benefits of technology, especially when dealing with the complications of the real world.

Indeed, China is pouring hundreds of billions into advanced technology industries, including AI. In this context, the US wave of data center construction and grid modernization looks like a necessity rather than an optional choice.”

Read more in the Wyoming Star. 

PPI Unveils AI Innovation Toolbox to Help Governors Compete in the Emerging Tech Economy

WASHINGTON — A new report from the Progressive Policy Institute (PPI) unveils an AI innovation toolbox for governors: a practical policy framework designed to help states compete for investment, jobs, and leadership in the rapidly expanding artificial intelligence economy. Authored by Dr. Michael Mandel, Vice President and Chief Economist at PPI, “An AI Innovation Toolbox for Governors” identifies five strategic levers (tax incentives, smart energy policy, university partnerships, workforce training, and AI extension programs) to help states attract AI investment, boost productivity, and create well-paying jobs.

With U.S. businesses investing over $700 billion annually in software and tech giants spending $180 billion on AI-related capital outlays in the first half of 2025 alone, the report warns that states focused solely on regulating AI risk missing out on transformative economic growth.

“Governors once competed for auto plants. Today, they must deploy a full AI innovation toolbox to attract cutting-edge businesses and build durable growth,” said Mandel. “That means creating the right incentives, ensuring grid readiness, training workers, and helping small businesses adopt AI to remain competitive.”

The brief highlights forward-looking state initiatives in New Jersey, New York, Florida, and others as models for attracting AI-related investment and spurring innovation. It also calls for states to move quickly, citing a “window of opportunity” that may close as industry leaders finalize siting decisions for data centers and research hubs.

Key policy tools detailed in the report include:

  • Tax incentives to attract AI startups and data centers
  • Smart grid investment and demand-side energy policies to accommodate rising electricity needs
  • University-business partnerships to drive AI research and workforce development
  • Career technical education and retraining for AI-related job growth
  • State-sponsored AI extension programs to help small and midsize businesses integrate AI

Governors have a narrow window to act. Those who leverage the full AI innovation toolbox will be better positioned to drive productivity, modernize key sectors, and deliver rising incomes in their states.

Read and download the report here.

Founded in 1989, PPI is a catalyst for policy innovation and political reform based in Washington, D.C. Its mission is to create radically pragmatic ideas for moving America beyond ideological and partisan deadlock. Find an expert and learn more about PPI by visiting progressivepolicy.org. Follow us @PPI

###

Media Contact: Ian O’Keefe – iokeefe@ppionline.org

An AI Innovation Toolbox for Governors

Should state governors act now to capture their share of the tech/AI investment boom? The answer is unequivocally yes. By many measures, the economic heft of the software and related industries now matches or exceeds that of the motor vehicle industry, a traditional target of state economic development efforts. In 2024, U.S. businesses invested $700 billion in software, about equal to consumer spending on motor vehicles. In the first half of calendar year 2025, Amazon, Alphabet, Microsoft, Meta, Oracle, and Apple laid out a stunning $180 billion in total capital expenditures, primarily on AI-related structures and equipment.

To put these numbers in perspective, this tech and AI investment surge dramatically overshadows domestic investment from major manufacturing industries. The motor vehicle industry invested just $29 billion in structures and equipment across all states in 2023, while the primary metals industry, including steel and aluminum, invested only $15 billion.

Governors who attracted high-wage auto assembly and parts plants to their states in the 1980s and 1990s were hailed as economic heroes. They used economic development tools like tax incentives and worker training subsidies to lower the cost and riskiness of making such large investments. At the same time, smaller businesses were supported through manufacturing and agricultural extension programs, which helped them keep up with new developments. The economic literature suggests that the benefits of these policies, on average, substantially exceed the costs.

Today, governors are putting together a new “AI innovation toolbox,” analogous to the economic development tools of the past. Tax incentives, employed wisely, can attract AI startups and data centers to boost state economies. Smart energy policy, including faster approval of new grid investments, demand-side management, and long-term capacity commitments, can better match electricity generation and transmission upgrades to AI, industrial, and transportation demand, and minimize the impact on retail rates. Governors can leverage their state’s public and private universities to develop and attract AI-focused businesses. Worker training subsidies and AI-focused career technical education can ensure that existing workers are not left behind. AI “extension programs” can accelerate the adoption of AI by small businesses, making state industries such as manufacturing, agriculture, and construction more competitive, and creating more demand for AI-enabled workers.

Read the full report.

New PPI Report Warns: Private AI Lawsuits Threaten Innovation, Urges States to Reject “Litigation for Profit”

WASHINGTON — As states rush to regulate artificial intelligence (AI), a new white paper from the Progressive Policy Institute (PPI) warns that allowing private lawsuits without proof of harm would derail innovation, empower trial lawyers, and undermine responsible governance. “Artificial Intelligence, Not Artificial Litigation,” by PPI Senior Fellow Philip S. Goldberg and AI governance attorney Josh Hansen, urges lawmakers to reject a rising trend: giving private, for-profit attorneys sweeping power to enforce new AI laws.

The authors warn that so-called “private rights of action” would enable speculative lawsuits over AI use, often by uninjured plaintiffs, enriching lawyers at the expense of developers, startups and American competitiveness.

“Law enforcement, particularly over emerging technology such as AI, requires prosecutorial judgment, where governments can investigate the facts and take appropriate steps to protect the public, not for-profit lawsuits where private lawyers leverage the regulations to enlarge their own wallets,” said Goldberg. “History has shown that if states turn AI law enforcement over to private attorneys, it is going to incentivize litigation abuse. If we want to lead the world in AI, we must reject legal frameworks that punish innovation instead of protecting consumers.”

Goldberg and Hansen draw lessons from decades of lawsuit abuse under laws like the Telephone Consumer Protection Act and Illinois’s Biometric Information Privacy Act, where lawyers exploited technicalities to extract massive paydays. They argue that similar AI-related provisions would unleash a torrent of class actions untethered from any actual injuries.

Instead, the authors call for:

  • AI law enforcement led by state attorneys general, not private litigants;
  • A clear separation between lawsuits that are intended to compensate wrongfully injured individuals and those needed to police compliance;
  • Strong guardrails to protect AI developers from speculative or duplicative lawsuits.

The report cites bipartisan examples, such as California, Virginia and Colorado, where lawmakers are prioritizing regulatory clarity and flexibility over legal uncertainty. It also highlights how existing laws already empower consumers to sue over AI-related harms, from discrimination to privacy violations, without inviting frivolous claims.

“The stakes in the race over AI are too high to let private lawyers turn AI law enforcement into a for-profit game of bounty hunting,” said Goldberg. “We need clear rules and fair oversight, with the public sector, not private contingency fee lawyers, leading the charge.”

Read and download the report here.

Founded in 1989, PPI is a catalyst for policy innovation and political reform based in Washington, D.C. Its mission is to create radically pragmatic ideas for moving America beyond ideological and partisan deadlock. Find an expert and learn more about PPI by visiting progressivepolicy.org. Follow us @PPI

###

Media Contact: Ian O’Keefe – iokeefe@ppionline.org

Court Highlights DOJ Overreach and Refocuses on Consumer Welfare in Deciding Remedies in Google Search Monopolization Case

Judge Mehta of the District Court for the District of Columbia yesterday issued a long-awaited decision on remedies in the Google Search monopolization case. The 230-page opinion is both a nod to Google’s proposed remedies, albeit with modifications, and a reining in of overreach by the U.S. Department of Justice (DOJ), which had proposed to restructure and quasi-regulate the online search market.

Regardless of where different stakeholders come out on Judge Mehta’s opinion, one thing is clear. The remedies adopted by the court tell us a lot about antitrust’s emerging role in the digital sector. The decision reinforces the importance of antitrust enforcement in promoting competition in digital markets. But it also conveys a strong message about its limitations and why the courts are ill-suited to engage in ongoing enforcement of regulatory-style remedies in a complex and dynamic sector.

To be clear, the decision places significant restrictions on Google moving forward. It prohibits Google from entering into or maintaining exclusive contracts for the distribution of Google Search, Chrome, Google Assistant, and the Gemini app. It also requires Google to share narrow sets of search index and user-interaction data and to provide syndication services for search and search text ads to “qualified competitors.”

These conditions target the conduct that fostered Google’s dominance in online search, as the court established at the liability stage in 2024. At the same time, the conditions also help rivals achieve the scale necessary to compete, but for a much shorter time period than requested by the DOJ. In doing so, the goal of the opinion is clear: address the competitive harm, open the search market to competition, and apply pressure on rivals to innovate quickly.

The remedies proposed by the DOJ and rejected by the court are also revealing. For example, Google is not required to divest the Chrome browser or Android. The decision also allows payments to distributors for pre-loading or placement of Google search products; rejects a requirement for Google to share granular query-level data with advertisers; does not require mandatory choice screens; and declines to impose anti-retaliation, anti-circumvention, and self-preferencing conditions.

In whittling down the DOJ’s 20-plus proposed remedies covering bans on contracts, divestitures, and regulatory oversight of Google’s search platform, Judge Mehta’s decision exposes several important themes.

First, it is hard not to notice the court’s conclusion — repeated many times over — that many of the government’s proposals are not “tailored to fit,” or are “unrelated to,” Google’s anticompetitive conduct in online search. This sends a clear message that plaintiffs should stick to proposed remedies that address the specific antitrust violation(s) and avoid those designed to achieve broader public policy goals in a market.

Second, you cannot miss the court’s focus on the impact of the DOJ’s proposed remedies on consumers — something the government largely overlooked but PPI emphasized in its April 2025 report Antitrust Remedies and U.S. v. Google: Putting the Consumer Back into the “Fix.” The opinion highlights the importance of the consumer welfare standard, stating that the court “…must be sensitive to remedies that risk substantially stifling technological innovation or impairing consumer welfare,” and that “…if one or more of these adverse market impacts were to come to pass, it would harm consumer welfare.”

Indeed, consumer welfare features prominently in Judge Mehta’s reasoning behind rejecting the DOJ’s proposal to require divestiture of the Chrome browser. To wit, “…the court is highly skeptical that a Chrome divestiture would not come at the expense of substantial product degradation and a loss of consumer welfare.” In refocusing the antitrust lens on consumer welfare, the decision also identifies user privacy and data security as a prominent consumer welfare issue.

For example, the court bases “…the release of less than the full datasets…” on the need to promote user privacy. Similarly, the decision requires modifications to the makeup of the Technical Committee to “…address the important data privacy and data security issues arising from the Search Index and User-side Data remedies.” In digital markets, where the currency of exchange is user information, not dollars, product quality and user privacy are central to antitrust’s effects-based analysis under the prevailing consumer-welfare standard. The opinion amplifies the relevance of this fact.

Last but not least, the decision opens by framing a critical reality for antitrust enforcement in the digital sector. Namely, the pace of innovation — and especially GenAI technology — is having a transformative impact on online search. A remedy must, therefore, consider the implications. For example, Judge Mehta’s decision explains the need to modify DOJ’s data-sharing proposals “…to mitigate their impact on Google’s and competitors’ innovation incentives.” The court also shortened the duration of the DOJ remedy because the “…10-year term runs the risk of growing stale in these fast-moving times, where GenAI technologies are breaking barriers seemingly at light speed.”

While there are many takeaways from the Google search remedies decision, two are likely to have significant staying power. One is that the pace of innovation — juxtaposed with the relatively slow pace of antitrust litigation — poses ongoing challenges in the digital sector. A second is that the consumer welfare standard remains alive and well. As PPI noted in Antitrust Remedies and U.S. v. Google: Putting the Consumer Back into the “Fix,” “The District Court has the unique opportunity to ensure a strong remedy that restores competition while striking a better balance to protect consumers under the consumer welfare standard.” The opinion achieves this important goal.

AI and the Future of Local News

In the face of prevailing industry headwinds, many local news outlets around the country have shuttered. Their continued overreliance on an advertising-based revenue model, a vestige of pre-internet times, leaves them vulnerable to further decline. In the last year alone, 130 more newspapers folded — a rate of two and a half per week — leading to a total loss of over 7,000 newsroom jobs. At the same time, outlets that took note of changing consumer preferences for digital over print formats saw a net increase of 105. To not just survive but thrive in today’s fast-moving news media landscape, where consumers lean toward new formats such as short-form video, local outlets must embrace a tech-forward attitude. This means adopting the newest tools on the block, including AI, that advance newsroom productivity and capacity for producing high-quality, public interest journalism.

Building networks of trust among communities and sustaining vibrant information ecosystems are non-negotiables for healthy democracies. When local news deserts proliferate, communities experience diminished civic engagement: lower voter turnout, fewer contested races, and poorer public participation. It’s no secret either that the loss of reliable sources of information exacerbates urban-rural polarization. National news media rarely covers localities, and even less so when they are located in rural areas. Simultaneously, the most prolific outlets are also often the least attuned to the situation on the ground in most of working America, undermining valuable discourse in overlooked communities.

Reversing these trends requires an honest reckoning with the multi-faceted challenges that local newsrooms face. Their tenuous fiscal situations leave them under-resourced on several fronts. Local reporters must juggle multiple beats at a time, covering city council hearings, school board meetings, and small business openings, all the while performing serious investigative work. This workload makes devoting the requisite attention to unearthing and covering scoops well extremely difficult. Understaffing also inhibits other critical newsroom functions, such as fact-checking, translating, and distributing published stories. Without these supporting activities, local news risks its journalistic credibility and loses reach into communities. The shift to digital further exposes key skill deficiencies in local newsrooms, namely expertise in audience analytics and web design, that inhibit their ability to cater to relevant audiences. That’s especially important at a time when more young people are turning to social media for information that was once the bread and butter of a local newsroom — restaurant recommendations, classifieds, community events, and sporting fixtures.

Given the status quo, local newsrooms should welcome artificial intelligence as a tool for increasing high-quality coverage. AI holds enormous potential for local outlets because it excels at many functions that newsrooms currently lack due to budget cuts. For one, AI thrives when tasked with pattern recognition, which is highly useful for investigations that require journalists to process large datasets, such as troves of municipal documents. Another area of strength is AI’s aptitude for automating already standardized tasks like translation. For local outlets that serve multilingual audiences, accessible translations may meaningfully increase public engagement with existing coverage. 

AI may also help newsrooms scale their digital presence to better deliver content in the formats that most audiences now prefer, including audio and video versions, summaries, and visual explainers. When local news media engage well with their audiences online, their reach expands significantly. AI can help local newsrooms level the playing field against upstart competitors (like news creators on Instagram) and larger hedge fund-owned conglomerates (which can invest in digital expertise and apportion the cost across multiple mastheads).

Indeed, it’s clear that newsroom productivity and the development of new capabilities are critical issues given new competitive dynamics and shifting consumer expectations. Consumers have a preference for new formats, as shown by the rise of TikTok as a source of news and the continuing long-term decline in consumers’ use of publisher-owned and operated platforms (e.g., websites and apps) in favor of social media. 

At the same time, due to staffing shortages, local reporters have a lot more on their plates than in the past. For example, local reporters covered an average of 3.8 different levels of government in 2000 (e.g., cities, counties, school boards, special districts, townships), which increased to 10 different levels of government in 2020. When they are already spread so thin, building trustworthy sourcing, chasing new leads, parsing through troves of documents, or otherwise pursuing time-consuming investigations becomes a near impossibility. 

With the acknowledgement that local newsrooms are under-resourced, AI can play a crucial role in expanding their investigative capacity. Tools like LocalLens, an AI-powered application launched in 2023 that automatically transcribes and summarizes local government meetings, allow journalists to cast their net far and wide for potential story ideas. LocalLens has also helped reporters connect with sources, such as student speakers at school board meetings, whom they might not have found on their own. Another example is the Associated Press’ 2024 launch of LocalLede, which uncovers relevant regulations from over 430 federal agencies for local jurisdictions. LocalLede can serve as a useful starting point for writing stories, flagging new federal announcements like changes to Earned Income Tax Credit (EITC) thresholds. Other AI tools have further enriched investigative journalism, helping reporters process large amounts of public records, including campaign finance disclosures, civil complaints, and municipal budgets.

AI tools also streamline repetitive tasks, allowing local news outlets to redirect limited resources to where they are needed most. Their pervasiveness in copy editing at major publications, for fact-checking and grammar and style corrections, should serve as a model for local newsrooms that still perform these tasks by hand. Natural-language-processing-based services like Otter.ai save reporters countless hours whenever they need written transcriptions after conducting interviews. AI has also shown promise in automating translation work. When wildfires swept across Los Angeles in January 2025, the Boyle Heights Beat used a beta version of an English-to-Spanish translation GPT to make its coverage and social media updates available to Spanish-speaking members of the community.

For local newsrooms falling behind in their digital offerings due to a lack of technical expertise on staff, AI provides a means to catch up with larger counterparts. THE CITY, a non-profit outlet that covers all boroughs of New York City, currently offers AskNellie, an AI assistant that answers readers’ questions ranging from rent regulations to upcoming elections by connecting them with previous coverage. In 2024, the publication used artificial intelligence to create an online map of its areas of coverage, which gave audiences transparency about whether it was sufficiently documenting underserved communities. ARLNow, a local Northern Virginia outlet, increased online engagement by publishing an automated daily newsletter, which it lacked the staff to compile before using AI.

While artificial intelligence has the potential to significantly improve local news media, it is important to approach its rollout with prudence. Failing to think through how AI fits each use case will result in conspicuous misfires. For example, shortly after launching AI-authored high school sports coverage at several of its local news outlets, including the Columbus Dispatch, in 2023, newspaper conglomerate Gannett pulled the project amid widespread reader backlash over poor-quality post-game summaries. Also in 2023, editors at CNET had to issue numerous corrections to AI-written financial advice articles that contained obvious factual errors. Sports Illustrated showed poor judgment in November 2023 when the publication attached fake journalist names and profiles to AI-generated content. Used judiciously, AI is a valuable tool for revitalizing local outlets and bringing them into the modern news media landscape, but it cannot be applied as a blanket solution without thoughtful consideration.

Ultimately, local news must keep up with the times. Embracing AI enables local outlets to make the most out of their limited resources — continuing to produce high-quality investigations, reach larger audiences, and put out competitive digital offerings. While local newsrooms may not completely reverse their decline anytime soon, adopting a tech-forward attitude will at least help them take steps in the right direction, ensuring that they can serve as more robust informational backbones for local communities who rely on their coverage.

New PPI Report Recommends Three-Year Moratorium on State-Level AI Regulation

WASHINGTON — As artificial intelligence (AI) rapidly reshapes the global economy, the Progressive Policy Institute (PPI) is calling on Congress to enact a temporary, three-year moratorium on state-level AI regulation to pave the way for a comprehensive federal framework.

In a new report, “The Case for a Targeted AI Moratorium,” Senior Economic & Technology Policy Analyst Andrew Fung argues that a patchwork of conflicting state laws risks stifling innovation, raising compliance costs, and repeating the federal failures seen in data privacy regulation.

The report details how more than 700 AI-related bills were introduced in state legislatures last year, with 26 states already enacting varied and often conflicting rules. These range from requirements for watermarking AI-generated content to limits on digital replicas and employment-related algorithms. According to Fung, this rapid and disjointed activity threatens to entrench a Balkanized regulatory landscape that increases burdens for businesses, confuses consumers, and reduces the likelihood of coherent national legislation. Fung draws a clear parallel to the U.S. experience with data privacy, where Congress’s failure to act early left Americans with inconsistent protections and companies facing billions in compliance costs.

The report arrives as Congress considers a reconciliation bill containing a provision for a 10-year moratorium on state AI laws. Fung opposes its inclusion in the reconciliation process and views that duration as excessive, but supports a shorter, strategic pause that would allow federal lawmakers time to craft thoughtful, uniform rules.

“Getting AI regulation right is essential to both U.S. competitiveness and consumer protection,” said Fung. “A short-term federal pause gives Congress breathing room to act before states lock in divergent and duplicative frameworks that would be nearly impossible to harmonize later.”

Drawing parallels to privacy regulation — where fragmented state laws derailed bipartisan federal efforts — the report highlights the dangers of legislative inertia. Already, more than 700 AI-related bills have been introduced in state legislatures, and 26 states have enacted varying rules, sowing confusion for innovators and regulators alike.

Key takeaways from the report include:

  • A three-year moratorium on state AI regulation is recommended to give Congress time to develop a comprehensive federal framework.
  • State-level AI laws are proliferating rapidly, with more than 700 bills introduced and 26 states enacting legislation as of Spring 2025.
  • Fragmented state regulations create compliance challenges, suppress innovation, and increase the political difficulty of passing federal legislation.
  • The U.S. experience with privacy regulation offers a cautionary tale: Congressional delays can entrench a patchwork of state laws that preclude national solutions.
  • A short-term federal preemption can give Congress time to design a modern AI regulatory framework that balances innovation, competition, and consumer protection.

Read and download the report here.

Founded in 1989, PPI is a catalyst for policy innovation and political reform based in Washington, D.C. Its mission is to create radically pragmatic ideas for moving America beyond ideological and partisan deadlock. Find an expert and learn more about PPI by visiting progressivepolicy.org. Follow us @PPI

###

Media Contact: Ian O’Keefe – iokeefe@ppionline.org