The Surface Transportation Board (STB) has resurrected a 2016 regulation on “reciprocal switching” that would require railroads to “unbundle” their transportation services and provide competitors with access to their infrastructure, at regulator-determined prices and service requirements. There are plenty of problems with this proposed regulation, including discouraging private sector investment and creating new operational complications. In this note, however, we will focus on the broader question of why forced unbundling of railroad transportation services is precisely the wrong regulatory strategy for today’s “Supply Chain Economy,” a strategy that could worsen supply chain disruptions and add to inflation.
To understand why a 2016-vintage regulatory approach is totally wrong for the 2022 economy, we must first consider the underlying economics of supply chains. A supply chain consists of a flow of goods, of course, from producers to buyers and consumers, via transportation links such as railroads, container ships, airlines and truckers, and intermediaries such as importers and wholesalers. But equally important is the flow of data which allows all of this production and movement to be coordinated.
As I note in a forthcoming article in the Winter 2022 issue of The International Economy, it is better to think of a supply chain as a “supply-and-data chain.” In that spirit, supply-chain management has been defined by the Association of Supply Chain Management as the “design, planning, execution, control, and monitoring of supply-chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand and measuring performance globally.”
Today’s domestic and global economies are built around these “supply-and-data chains.” A retailer like Walmart uses its knowledge of expected U.S. consumer demand to place orders with factories around the world months ahead of when the goods are needed, and then coordinates the movements of these goods to its far-flung stores. At every point along the way, the goal is to use data to reduce costs and ensure a smooth flow of goods.
This “Supply Chain Economy” is very different from the classic picture of an economy consisting of a series of unbundled arm’s-length transactions. In an economy with forced unbundling, factories would have to commit themselves to production runs without knowing whether the demand existed, and without knowing whether the transportation capacity was available.
In a supply chain economy, companies compete on the basis of who can best use data to organize production and logistics across the global economy, lowering costs and increasing reliability. The key is to take a big picture view across a wide range of markets, rather than focusing on competition in individual markets.
From this perspective, forced “reciprocal switching” would divert resources away from the optimization of supply chains. Railroads would have to give a high priority to moving goods in a way that met the reciprocal switching requirements, rather than lowering costs and speeding goods to their ultimate customers. The result would be more supply chain disruptions, and higher inflation. That’s not an outcome that anyone wants right now.
The Federal Reserve made clear in its December 2021 meeting that it intends to raise interest rates in 2022. Interest rate changes flow through the economy and affect the rates borrowers pay on all types of loans. In particular, rising interest rates may push up home mortgage rates and place renewed attention on the credit scores that financial institutions use to determine who qualifies for loans.
In the area of housing finance, how credit scores are used by key market players has received attention for some time. The better the credit score, the more likely a borrower will qualify for a mortgage at the best possible rate, saving the borrower money over the life of the loan. There has been debate, however, over the models used to create those scores: should there be more competition and, more important, can new models lower costs for home buyers and ensure equitable access to loans?
Two of the most important entities in housing finance are the nation’s housing government sponsored enterprises — Fannie Mae and Freddie Mac (Enterprises) — which are now under government conservatorship overseen by the Federal Housing Finance Agency (FHFA). As a result, many policymakers and elected officials have encouraged the FHFA to take steps to promote more competition in the credit scoring models used by the Enterprises, to help lower costs for consumers and expand access to credit for previously underserved individuals.
These are important goals and should be pursued. However, some of the reforms put forward would have had a less-than-optimal effect, decreasing competition and potentially driving up mortgage costs rather than lowering them. The Enterprises have used a proven credit score model for over 20 years. Introducing competitive reforms has merit, but it must be done in a way that does not create unfair advantages. FHFA has a clear mandate to keep the Enterprises solvent and help homeowners, as witnessed by its recent COVID assistance. But FHFA must ensure that any reforms maintain competition and keep prices low for consumers.
This paper reviews how credit scores are presently used by the Enterprises, discusses some of the issues that must be addressed to keep competition in the credit score market, and examines some of the pitfalls associated with proposed reforms to that market.
ENTERPRISES HAVE USED PROVEN CREDIT SCORE MODELS FOR OVER TWO DECADES
Fannie Mae and Freddie Mac (Enterprises) are commonly known as housing government sponsored enterprises. Somewhat unique in their structure, they were originally chartered by Congress, but owned by shareholders, to provide liquidity in the mortgage market and promote homeownership.[1] The Enterprises maintained this unique ownership structure until their financial condition worsened during the financial crisis of 2008, when they were placed in government conservatorship under the leadership of the Federal Housing Finance Agency (FHFA).
The Enterprises do not originate loans. They purchase loans made by others (such as banks) and then package those loans into securities that are sold on the secondary market to investors. The loans purchased by the Enterprises can only be of a certain size, and home borrowers must have a minimum credit score to qualify. The Enterprises use these and other criteria to minimize the risk that the loans they purchase will not be paid back (default) — an important safeguard, because buying loans from banks and other lenders replenishes those lenders’ funding and allows further home lending.
The loans purchased by the Enterprises are then packaged into securities with specific characteristics that are disclosed to investors — including the credit scores on the loans in the security. According to FHFA, the Enterprises use credit scores to help predict a potential borrower’s likelihood of repayment and have been using a score developed from one model, FICO Classic,[2] for over 20 years.[3] In discussing FICO Classic, FHFA points out that it “and the Enterprises believe that this score remains a reasonable predictor of default risk.”[4]
While the current system has been in effect for some time, Congress recently asked FHFA and the Enterprises to review their credit scoring model to determine whether additional credit scoring models could be used by the Enterprises to increase competition. Specifically, FHFA was to “establish standards and criteria for the validation and approval of third-party credit score models used by Fannie Mae and Freddie Mac.”[5] Advocates of alternatives to FICO Classic said at the time that using other validated credit scoring models would lead to more access.[6] While a worthy goal, incorporating a flawed new model could have unintended consequences and potentially drive up costs.
CONFLICT OF INTEREST COULD LEAD TO DECREASED COMPETITION
Beginning in 2017, FHFA proposed a rule which would set the stage for reviewing the Enterprises’ credit score models. The rule FHFA finalized in 2019 directed the Enterprises to review and validate alternative credit models in the coming years.
Section 310 of the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018 (Pub. L. 115–174, section 310) amended the Fannie Mae and Freddie Mac charter acts and the Federal Housing Enterprises Financial Safety and Soundness Act of 1992 (Safety and Soundness Act) to establish requirements for the validation and approval of third-party credit score models by Fannie Mae and Freddie Mac.[7]
At the time of the proposed rule, some thought that alternative credit scores could open access to a larger group of homeowners.[8][9] While that is an admirable goal, and one in keeping with FHFA’s mission for the Enterprises even now, a major issue was left unresolved. The proposed rule “would have required credit score model developers to demonstrate, upon applying for consideration, that there was no common ownership with a consumer data provider that has control over the data used to construct and test the credit score model.”[10]
The proposed rule would have created a separation between those who create and control the data and those in charge of the models that create the scores — an important goal. Not surprisingly, the proposed rule received significant comments. Unfortunately, the final rule did not adopt this important provision, which would have required those submitting models not to have a conflict of interest or “common ownership with a consumer data provider that has control over the data used to construct and test the credit score model.”[11] This lack of clear independence could set the stage for a lack of competition in the future.
While the rule was being developed, then-FHFA Director Mel Watt said in 2017, “how would we ensure that competing credit scores lead to improvements in accuracy and not to a race to the bottom with competitors competing for more and more customers? Also, could the organizational and ownership structure of companies in the credit score market impact competition? We also realized that much more work needed to be done on the cost and operational impacts to the industry. Given the multiple issues we have had to consider, this has certainly been among the most difficult evaluations undertaken during my tenure as Director of FHFA.”[12]
Several observers at the time of the proposed rule pointed out that having one dominant player replaced by another would not further competition but could further consolidate the market. One commentator stated, “to push for alternative scoring models may simply trade one dominant player (FICO) for another (Vantage),”[13] referring to legislation that would ultimately be incorporated into the bill from which the proposed rule was developed. The Progressive Policy Institute (PPI) held an expert panel discussion at the time that also examined the conflict-of-interest problems with adopting VantageScore.[14] “The reason? Because the owners of Vantage control the supply of information currently used by FICO to make its determination. And given the history of monopolies, it would not be surprising to see Equifax, Experian, and TransUnion use that leverage to the advantage of Vantage, and eventually force FICO out of business.”[15]
The proposed rule points out that “VantageScore Solutions, LLC is jointly owned by the three nationwide CRAs. The CRAs also own, price, and distribute consumer credit data and credit score. This type of common ownership could in theory negatively impact competition in the marketplace.”[16] Another writer at the time also acknowledged the potential conflict-of-interest provision of the proposed rule.[17] While these issues were not resolved in the final rule, they still matter and can affect not only competition but also costs in the residential mortgage marketplace.
Competition is key to innovation, and inclusiveness is important to furthering homeownership. Alternative data, such as rent payments, utility payments, and bank balances, could potentially be used to help complete the credit picture and increase access to credit.[18] Other research organizations have acknowledged that FICO has improved its models and incorporated alternative sources of data that are available,[19] which would not carry the conflict of interest that VantageScore would have. FHFA must ensure that competition is maintained, without creating unfair advantages.
LACK OF REAL COMPETITION COULD INCREASE COSTS
Before any changes can happen, however, FHFA must articulate the full costs of any change to consumers, lenders, the Enterprises, and investors. COVID-19 proved a real-world laboratory for the Enterprises under stress. FHFA’s recent Performance Report lays out the series of actions the Enterprises took to help borrowers affected by COVID-19, including payment deferrals, forbearance, and eviction suspensions.[20] These actions likely kept many homeowners in their homes during a difficult period, and kept the Enterprises functioning. The relief provided was important and was balanced against the risk to the Enterprises — but it did come at a cost.
FHFA made its first announcement on COVID assistance to homeowners in March 2020.[21] A few months later, in August 2020, FHFA announced that the Enterprises would charge a fee of 50 basis points per refinancing to help make up for any potential losses the Enterprises might experience.[22] An initial estimate put the projected losses at $6 billion. Thankfully, the Enterprises saw declining rates of loans in forbearance, and the fee was ultimately ended in July 2021.[23]
Changes at the Enterprises have effects across the industry. Just as the potential increases in interest rates by the Federal Reserve this year could raise interest costs to home buyers, changes to the credit scoring model could raise costs as well. At the time of the proposed rule, then-FHFA Director Watt acknowledged as much, stating that “much more work needed to be done on the cost and operational impacts to the industry”[24] before changes were made. Clearly, the FHFA realizes that any changes to its credit scoring models will likely impose added costs on the housing finance sector. As an aside, changes in related areas such as mortgage servicing have already led to increased costs in the home purchase ecosystem.[25]
Changes to the credit scoring models could also affect prices in the secondary market for mortgage-backed securities (MBS) and credit risk transfers (CRT). As the FHFA pointed out, investors “in Enterprise MBS and participants in Enterprise CRT transactions would need to evaluate the default and prepayment risks of each of the multiple credit score options.”[26] While the FHFA in the final rule did not address the costs of these evaluations, incorporating multiple credit score options could raise the compensation investors demand and ultimately increase costs to home buyers through the fees the Enterprises would need to pass on.
Others have pointed out that changes to credit scoring models could have cost impacts for banks, investors, pension funds, and others.[27] These issues of cost and operational impacts need to be given serious consideration because, as the recent Enterprise actions related to COVID-19 made clear, they matter. The lending industry was upset when the Enterprises imposed a temporary fee to help ensure the Enterprises’ soundness through the difficult period.[28] What would the costs be with a wholesale change to the credit score model system? And who would ultimately pay those costs? These are questions the FHFA must address as it reviews any changes to the credit scoring model.
One of the FHFA’s current core goals is to “Promote Equitable Access to Housing.”[29] To ensure that the Enterprises can carry out their important role in addressing long-standing issues of equity, they need to be in the best possible financial position to do so. A question that FHFA needs to address as it reviews credit scoring models is: would using a model with a conflict of interest hurt the goal of equity? Would changes raise prices or, worse, limit access for the very borrowers FHFA is looking to bring into the market?
CONCLUSION AND QUESTIONS FOR CONSIDERATION
The COVID-19 crisis and its effects on the housing market were serious, but thankfully not devastating, due to prudent planning and oversight by the Enterprises and FHFA. The Enterprises’ current credit scoring model has supported necessary liquidity in the market in both good times and difficult ones. As FHFA oversees the next phase of testing alternative credit score models, it should ensure that the models are subjected to the criteria laid out in its final rule — with emphasis placed on the cost and market effects any change would have. The Enterprises were called upon to help homeowners during the recent crisis and were able to do so with minimal disruption to consumers and housing finance stakeholders. The Enterprises and FHFA should take seriously how any further changes would affect competition and the soundness of the Enterprises, and how those changes could increase costs for everyone in housing finance.
REFERENCES
[1] “Fannie Mae and Freddie Mac in Conservatorship: Frequently Asked Questions,” Congressional Research Service, July 22, 2020, https://crsreports.congress.gov/product/pdf/R/R44525.
[2] “Selling Guide: B3-5.1-01, General Requirements for Credit Scores,” Fannie Mae, September 2021, https://selling-guide.fanniemae.com/Selling-Guide/Origination-thru-Closing/Subpart-B3-Underwriting-Borrowers/Chapter-B3-5-Credit-Assessment/Section-B3-5-1-Credit-Scores/1032996841/B3-5-1-01-General-Requirements-for-Credit-Scores-08-05-2020.htm.
[3] “There’s More to Mortgages than Credit Scores,” Fannie Mae, February 2020, https://singlefamily.fanniemae.com/media/8511/display.
[4] “Credit Score Request for Input,” FHFA Division of Housing Mission and Goals, December 20, 2017, https://www.fhfa.gov/Media/PublicAffairs/PublicAffairsDocuments/CreditScore_RFI-2017.pdf.
[5] “FHFA Issues Proposed Rule on Validation and Approval of Credit Score Models,” Federal Housing Finance Agency, December 13, 2018, https://www.fhfa.gov/Media/PublicAffairs/Pages/FHFA-Issues-Proposed-Rule-on-Validation-and-Approval-of-Credit-Score-Models.aspx.
[6] “Validation and Approval of Credit Score Models,” Federal Housing Finance Agency, August 13, 2019, https://www.fhfa.gov/SupervisionRegulation/Rules/RuleDocuments/8-7-19%20Validation%20Approval%20Credit%20Score%20Models%20Final%20Rule_to%20Fed%20Reg%20for%20Web.pdf.
[8] Karan Kaul, “Six Things That Might Surprise You About Alternative Credit Scores,” Urban Institute, April 13, 2015, https://www.urban.org/.
[9] Michael A. Turner et al., “Give Credit Where Credit Is Due,” Brookings Institution, June 2016, https://www.brookings.edu/wp-content/uploads/2016/06/20061218_givecredit.pdf.
[12] Melvin L. Watt, “Prepared Remarks of Melvin L. Watt, Director of FHFA at the National Association of Real Estate Brokers’ 70th Annual Convention,” Federal Housing Finance Agency, August 1, 2017, https://www.fhfa.gov/Media/PublicAffairs/Pages/Prepared-Remarks-of-Melvin-L-Watt-Director-of-FHFA-at-the-NAREB-70th-Annual-Convention.aspx.
[13] Paul Weinstein Jr., “No Company Should Have a Monopoly on Credit Scoring,” The Hill, December 7, 2017, https://thehill.com/opinion/finance/363755-no-company-should-have-a-monopoly-on-credit-scoring.
[14] “Updated Credit Scoring and the Mortgage Market,” Progressive Policy Institute, December 4, 2017, https://www.progressivepolicy.org/event/updated-credit-scoring-mortgage-market/.
[16] “Validation and Approval of Credit Score Models: Final Rule,” Federal Register, August 16, 2019, https://www.federalregister.gov/documents/2019/08/16/2019-17633/validation-and-approval-of-credit-score-models.
[17] Karan Kaul and Laurie Goodman, “The FHFA’s Evaluation of Credit Scores Misses the Mark,” Urban Institute, March 2018, https://www.urban.org/sites/default/files/publication/97086/the_fhfas_evaluation_of_credit_scores_misses_the_mark.pdf.
[18] Kelly Thompson Cochran, Michael Stegman, and Colin Foos, “Utility, Telecommunications, and Rental Data in Underwriting Credit,” Urban Institute, December 2021, https://www.urban.org/research/publication/utility-telecommunications-and-rental-data-underwriting-credit/view/full_report.
[19] Laurie Goodman, “In Need of an Update: Credit Scoring in the Mortgage Market,” Urban Institute, July 2017, https://www.urban.org/sites/default/files/publication/92301/in-need-of-an-update-credit-scoring-in-the-mortgage-market_2.pdf.
[21] “Statement from FHFA Director Mark Calabria on Coronavirus,” Federal Housing Finance Agency, March 10, 2020, https://www.fhfa.gov/Media/PublicAffairs/Pages/Statement-from-FHFA-Director-Mark-Calabria-on-Coronavirus.aspx.
[22] “Adverse Market Refinance Fee Implementation Now December 1,” Federal Housing Finance Agency, August 25, 2020, https://www.fhfa.gov/Media/PublicAffairs/Pages/Adverse-Market-Refinance-Fee-Implementation-Now-December-1.aspx.
[23] “FHFA Eliminates Adverse Market Refinance Fee,” Federal Housing Finance Agency, July 16, 2021, https://www.fhfa.gov/Media/PublicAffairs/Pages/FHFA-Eliminates-Adverse-Market-Refinance-Fee.aspx.
[25] Laurie Goodman et al., “The Mortgage Servicing Collaborative,” Urban Institute, January 2018, https://www.urban.org/sites/default/files/publication/95666/the-mortgage-servicing-collaborative_1.pdf.
[27] Pete Sepp and Thomas Aiello, “Risky Road: Assessing the Costs of Alternative Credit Scoring,” National Taxpayers Union, March 22, 2019, https://www.ntu.org/publications/detail/risky-road-assessing-the-costs-of-alternative-credit-scoring.
Join us to discuss the Great Transatlantic Data Disruption, how it will affect the U.S. and European economies, and what can be done.
Just this month, the use of Google Analytics on an Austrian website was ruled illegal by local regulators because personal data was being transferred to the U.S.
That’s just the beginning, as European regulations and court decisions make cross-border data flows between the U.S. and Europe increasingly difficult. To hear how the Great Transatlantic Data Disruption will unfold, join our panel of experts:
WEBINAR: The Great Transatlantic Data Disruption: How Can It Be Avoided?
WHEN: Wednesday, January 19, 2022; 9:30-10:30am ET
PANELISTS:
Michael Mandel, Vice President and Chief Economist at the Progressive Policy Institute
Kristian Stout, Director of Innovation Policy at the International Center for Law & Economics
Hosuk Lee-Makiyama, Director of the European Centre for International Political Economy (ECIPE)
Today, the Innovation Frontier Project (IFP), a project of the Progressive Policy Institute, hosted a virtual conference for policymakers, staffers and journalists titled “How Better Statistics Lead to Better Policy in a Changing World.” The Innovation Frontier Project assembled a panel of leading experts who addressed the need for new statistics in the key areas of the digital economy, healthcare, and supply chains. They showed how a relatively small investment in improving our data can avoid huge policy mistakes.
For the firms that adopt them, artificial intelligence (AI) systems can offer revolutionary new products, increase productivity, raise wages, and expand consumer convenience.[1] But there are open questions about how well the ecosystem of small and medium-sized enterprises (SMEs) across the United States is prepared to adopt these new technologies. While AI systems offer some hope of narrowing the recent productivity gap between small and large firms, that can only happen if the technologies actually diffuse throughout the economy.
While some large firms in the U.S. are on the cutting edge of global AI adoption, the challenge for policymakers now is to help these technologies diffuse across the rest of the economy. To realize the full productivity potential of the U.S., AI tools need to be available to the 89% of U.S. firms that have fewer than 20 employees and the 98% that have fewer than 100.[2] An AI-enabled productivity boost would be particularly timely as SMEs are recovering from the effects of the ongoing COVID-19 crisis.
The report discusses the promise for AI systems to increase productivity among U.S. SMEs, the current barriers to AI uptake, and policy tools that may be useful in managing the risks of AI while maximizing the benefits. In short: there is a wide range of policy levers that the U.S. can use to proactively provide the underlying digital and data infrastructure that will make it easier for SMEs to take the leap in adopting AI tools. Much of this infrastructure operates as a type of public good that will likely be underprovided by the market without public support.
Benefits of AI adoption
The central case for AI adoption is that human cognition is limited in a variety of ways, most notably in time and processing power. Software tools can improve decision-making by increasing the speed and consistency with which decisions can be made, while also allowing more decisions to be planned out ahead of time in the event of various contingencies. Under this broad framework, we can think about “AI” as being a broad suite of technologies that are designed to automate or augment aspects of human decision-making.
While many of AI’s most eye-catching use cases will likely remain the preserve of large platforms, the technology also holds tremendous promise for SMEs. The adoption of third-party AI systems will notably enable SMEs to streamline mundane (but often costly) tasks such as marketing, customer relationship management, pre- and post-sales discussions with consumers, and Search Engine Optimization (SEO). These systems can provide a lifeline for SMEs who are overwhelmed by the many challenges of running a business, and they can expand the number of businesses that are eligible for certain financial supports. For example, AI tools can improve the accuracy of credit risk underwriting models; by drawing on alternative data sources and a streamlined process, they can make it easier for SMEs to take out loans they otherwise might not qualify for under traditional methods. Along similar lines, research shows that AI-driven robotics have boosted (and will continue to boost) the productivity of SMEs in the manufacturing industry.
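To make the credit-underwriting point concrete, here is a minimal sketch of how alternative data can be folded into a simple credit-risk classifier. The feature names and data are entirely synthetic and hypothetical; this illustrates the general approach, not any lender’s actual model.

```python
# A minimal, self-contained sketch (synthetic data, hypothetical features):
# comparing a credit-risk model built on a bureau score alone against one that
# also uses alternative data such as on-time rent and utility payment rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Traditional signal: a credit-bureau-style score; alternative signals: share of
# rent and utility bills paid on time over the past year (all values synthetic).
bureau = rng.normal(680, 50, n)
rent_on_time = rng.uniform(0.5, 1.0, n)
utility_on_time = rng.uniform(0.5, 1.0, n)

# Synthetic default outcome: weaker scores and payment histories raise default risk.
logit = -0.01 * (bureau - 680) - 3 * (rent_on_time - 0.75) - 2 * (utility_on_time - 0.75)
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

features = {
    "bureau score only": np.column_stack([(bureau - 680) / 50]),
    "with alternative data": np.column_stack([(bureau - 680) / 50, rent_on_time, utility_on_time]),
}
for name, X in features.items():
    model = LogisticRegression().fit(X, default)
    print(f"{name}: in-sample accuracy = {model.score(X, default):.3f}")
```

In this toy setup, the model that sees the extra payment-history features fits the synthetic defaults at least as well as the bureau-score-only model, which is the intuition behind alternative-data underwriting.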
Importantly, this upcoming wave of AI technology can help SMEs catch up with larger, international firms because it can democratize the benefits of large information technology (IT) investments that superstar firms have been seeing over the last decade.
The economist James Bessen has argued that the top 5% of firms in many industries have been increasingly pulling away from the rest of the field because they’ve made large investments in proprietary IT systems. Their smaller rivals struggle to develop their own systems because they lack the necessary scale to hire a large stable of in-house technical talent. Amazon, for example, has a team of 10,000 employees working to improve its Alexa and Echo systems.
While AI tools can’t fully reverse this trend, they can help shrink the gap when embedded into Software as a Service (SaaS) platforms that smaller firms can make use of without the same level of investment. Essentially, through general-purpose AI tools, SMEs can have access to a host of productivity enhancements that these proprietary IT systems offer, but at a price point that is economical for SMEs. By shrinking this productivity gap, smaller firms can begin to compete in earnest while differentiating from large firms through improved customer service and greater product diversity. This will give a large leg up to SMEs who adopt these AI systems and help them better compete with large global incumbent firms.
Consider a firm like Keelvar Systems, which uses advanced sourcing automation to help businesses rapidly shift supply chains around the globe in the event of disruptions or delays. Essentially, it replaces or augments the work that a large supply chain and sourcing office would do within a firm. By using its service, or others like it, SMEs can benefit from similar levels of sophistication in their supply chain management without having employees spend hundreds of hours on tedious tasks or maintaining expensive proprietary IT systems.
There are firms like Legal Robot that have created a series of tools to help small businesses access legal services that would otherwise require a small army of in-house lawyers. With their service, SMEs can use smart contract templates based on their industry, receive instant contract analysis to make sure they are receiving fair terms and can automate certain aspects of compliance with laws like the GDPR.
Likewise, companies like Bold360 have helped SMEs improve their customer service experiences by offering a variety of AI-powered chatbots and tools. Many basic customer concerns about products or delivery can be handled by these chatbots, freeing up human customer representatives to focus their time on the hard or advanced cases. Again, the pattern is the same: large, multinational companies have invested billions of dollars to create proprietary versions of a service, and the customizability of AI is now helping that service become more accessible to SMEs.
What are the barriers to AI adoption for SMEs in the U.S. and what can policymakers do to help create a welcoming environment?
Data investment as a public good
Depending on the context, data can often have the same traits as other public goods. First, it is non-rival—the marginal cost of producing a new copy of a piece of data is zero. Stated differently, multiple individuals can use the same dataset at almost no additional cost. The second important trait is that data is hard to exclude. Consider this report. Once it has been posted online, it is difficult to prevent people from accessing and sharing it as they see fit. This is one of the reasons why copyright infringement is so hard to stamp out.
Oversimplifying, these two features can lead to two opposite problems. On the one hand, economic agents might underinvest in public goods, absent government-created appropriability mechanisms (such as patent and copyright protection). Conversely, public goods tend to be underutilized (at least from a static point of view). Any price that enables economic agents to recoup their investments in a public good will be above the good’s “socially optimal” marginal cost of zero. Public good policies thus involve a tradeoff between incentives to create and incentives to disseminate. For example, patents give inventors the exclusive right to make, use and sell their invention; but inventors must disclose their inventions, and these fall into the public domain after twenty years.
What does this mean for data and artificial intelligence? If policymakers think that data is an essential input for cutting-edge AI, then they should question whether obstacles currently prevent firms from investing in data generation or disseminating their data.
While policies in this space involve significant tradeoffs, some offer much higher returns to social welfare. For instance, to the extent policymakers believe existing datasets are being underutilized, purchasing private entities’ data (through voluntary exchanges) and placing it in public data trusts would be a better policy than imposing data sharing obligations (which could undermine firms’ incentives to produce data in the first place). This is akin to the idea of government patent buyouts.
Of particular interest for policymakers, however, is the fact that some SMEs are sitting on top of data flows that are not being fully utilized because it is expensive to make data usable and these datasets may not be very valuable in isolation. As an example, industry-level manufacturing data might be quite valuable to all firms in a sector, but the dataflows from one SME are much less valuable. The U.S. could align incentives by providing investment funds to quantify various aspects of business flows and then submit them to public data trusts, which could be accessible for use by all firms in the industry. This would essentially be treating valuable dataflows as a type of public infrastructure that needs government investment to be fully realized.
This kind of public investment can happen not only through incentives for private firms but through the public sector as well. Governments at all levels (state, local, and national) have valuable dataflows regarding infrastructure development, the organization of public transportation, and general macro-level economic data that can be turned into open datasets for public and commercial use. Particularly on the national level, the U.S. should consider investment in IT infrastructure that can coordinate the submission of open datasets on the state and local level.
Indeed, if key scientific or commercial datasets do not yet exist, the public sector may be best positioned to create them in the first place as a type of digital infrastructure provision. One notable structure that may help in this regard is the idea of a Focused Research Organization, which would provide a team of researchers with an ambitious budget and a nimble organizational structure with the specific goal of creating new public datasets or toolkits over a set time period.
Provide regulatory certainty
For SMEs deciding whether to invest in adopting AI tools, regulatory and compliance costs can be a significant deterrent. Policymakers should recognize that regulation is often more burdensome for small firms that generally have less ability to shoulder compliance costs. Especially in industries with low marginal costs, such as the tech sector, larger firms can spread fixed compliance costs across more consumers, giving them a competitive edge over smaller rivals. Regulation can thus act as a powerful barrier to entry. For instance, a study found that the European experiment with GDPR led to a 17% increase in industry concentration among technology vendors that provide support services to websites.
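To illustrate with assumed numbers: a fixed compliance cost of $500,000 a year works out to roughly $0.05 per customer for a firm serving 10 million customers, but $5 per customer for a firm serving 100,000. The figures are hypothetical, but they show why identical rules weigh far more heavily on smaller firms.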
This is not to say that additional regulation is, or is not, necessary in the first place. Indeed, there are a host of malicious or unintentional harms that can occur from improperly calibrated AI systems. Regulation can be a powerful tool to prevent these harms and, when well-balanced, can promote greater trust in the overall ecosystem. But potential regulation should follow sound policymaking principles that reduce the regulatory burden imposed on firms, notably by making regulation easy to understand, risk-based, and low-cost to comply with.
In the U.S., there is to date no overriding national AI regulation. Instead, each sectoral regulator (e.g., the Federal Aviation Administration, the Securities and Exchange Commission, and the Federal Trade Commission) has been steadily increasing its oversight of the use of algorithms and software in its specific area. This is likely an appropriate approach, as the kinds of risks and tradeoffs at play are going to be very different in healthcare or financial decision-making when compared to consumer applications. As this approach develops, it would be prudent to develop a risk-based framework that allows for more scrutiny of algorithmic decision-making in sensitive areas while giving SMEs confidence to invest in low-risk areas with the knowledge they will not later take on large compliance costs.
However, regulation over data protection has been far more segmented and piecemeal. And the state-by-state patchwork of rules that has developed can be a significant deterrent for SMEs when considering whether to invest in the use of certain AI tools. Policymakers should consider an overriding national privacy law that would be able to set standard rules of the road over the protection of data in all 50 states so that U.S. SMEs can invest with confidence.
Finally, U.S. policymakers should consider aggregating all this information through the creation of a dedicated AI regulatory website that provides a toolkit of resources for SMEs about the benefits of AI adoption for their business, the potential obligations and roadblocks that they need to be aware of, and best practices for cybersecurity hygiene and data sharing.
Expand the AI talent pool
A lack of skilled talent is one of the biggest barriers to AI adoption as the technical skills required to build or adapt AI models are in short supply. In the U.S., especially, smaller companies struggle to compete with the high salaries paid out by large tech firms for top-end machine learning engineers and data scientists.
In broad strokes, this skills shortage can be alleviated in two ways: through upskilling the domestic population and by improving immigration pathways for global talent.
To upskill the domestic population, one relatively simple lever would be to pay some portion of the costs for individuals and businesses who wish to upskill. In the U.S., a portion of a worker’s retraining costs may be written off as a business expense so long as the training improves the worker’s productivity in a role they currently occupy. But the expense is not tax deductible if the proposed training would enable the worker to take on a new role or trade.
For example, if a small manufacturing firm has technically competent IT staff who wish to attend a specialized training course on using machine vision systems in a warehouse environment, this expense would not currently be deductible as it would enable them to take on a new role within the company. This inadvertently creates an incentive to spend more on capital productivity investments than labor productivity investments. Addressing this imbalance would incentivize more firms to invest in worker retraining and help speed the creation of an AI workforce in the U.S.
Second, the U.S. urgently needs to address the shortcomings in the U.S. immigration system that make it more difficult for startups to compete with large incumbents on the basis of talent. Approximately 79% of the graduate students in computer science (and related subfields) studying in the U.S. are international students, which means a large majority of the potential AI workers U.S. firms may look to recruit must operate through the immigration system. The cost, complexity, and length of this process inevitably favor large, incumbent firms that can afford to navigate the regulatory maze of procuring an H-1B or related work visa.
A recent NBER paper showed in detail the myriad ways in which access to international talent is important for startup success. Utilizing the random nature of the H-1B lottery system, the paper compared startups that randomly received a higher percentage of their visa applications approved to those that did not. The random nature of the H-1B lottery makes an ideal policy experiment because it allows for a clean test in which other potentially confounding variables are controlled for. The study found that a one standard deviation increase in the likelihood of successfully sponsoring an H-1B visa correlated with a 10% increase in the likelihood of receiving external funding, a 20% increase in the likelihood of a successful exit, a 23% increase in the likelihood of a successful initial public offering, and a 4.8% increase in the number of patents filed by the startup.
Policymakers could begin to counter this effect by waiving immigration fees for firms below a certain size and by streamlining the application process.
Further, policymakers should look to create a statutory startup visa so that international entrepreneurs have a viable pathway into the U.S. to launch firms of their own. According to research by Michael Roach and John Skrentny, international STEM PhD students are just as likely as native-born students to report wanting to work for a startup or launch their own firm, but the difficulty of our immigration system pushes them toward working at large incumbent firms.
Using these two levers of upskilling and immigration reform, the U.S. should increase the supply of AI talent available to work at SMEs or to launch new SMEs, and thereby spur AI adoption.
Conclusion
Artificial intelligence systems hold great potential to streamline the costs of doing business in a modern economy, particularly for SMEs. The last 20 years of the information technology revolution have helped large, established firms reach the cutting edge of productivity while smaller firms have been left behind. But general-purpose AI tools now provide an opportunity for SMEs to take advantage of many of these IT advancements at a cost and a scale that is feasible for them. Policymakers should attempt to proactively build out the digital infrastructure that will make it easier for SMEs to take the leap in adopting AI tools.
Summary of policy recommendations:
Data investment as a public good:
Where appropriate, align incentives for the private sector to contribute industry-level SME data to public and private data trusts that could be used by everyone.
Invest in making more government datasets open to the public.
Fund Focused Research Organizations or similar groups with the explicit goal of creating new scientific and commercial public datasets.
Provide regulatory certainty:
Clarify existing regulations and the obligations that SMEs must meet when utilizing a new AI tool.
Encourage the development of a risk-based framework that allows for more stringent regulation of sensitive applications while giving certainty to SMEs on investment in low-risk applications.
Pass an overriding national privacy law so that SMEs aren’t deterred from investing by a patchwork of differing state-by-state laws.
Consider the creation of a new SME regulatory website that provides informational resources to SMEs about the benefits of AI adoption for their business and the potential roadblocks that they need to be aware of.
Expand the AI talent pool:
Encourage upskilling of the U.S. population by making worker retraining deductible as a business expense.
Reevaluate U.S. immigration pathways to make them more attractive for international technical talent.
Streamline the immigration application process and waive fees for firms below a certain size to make it easier for SMEs to compete for technical talent.
[1] This report is an adaptation of an earlier paper coauthored with Dirk Auer titled “Encouraging AI Adoption in the EU”.
[2] Annual Survey of Entrepreneurs – Characteristics of Businesses: 2016 Tables, United States Census Bureau
It’s rare when a single acquisition can offer insight into two different important questions in innovation. But the proposed purchase of cancer-diagnostic developer Grail — a startup with tremendous potential — by gene-sequencing leader Illumina is just that pivotal. First, is it pro-innovation for European antitrust regulators to have the power to block a deal involving two American biotech companies that do no substantial business in Europe? We argue that such “regulatory imperialism” by the EU has the potential to slow down biotech innovation, especially given the region’s generally lagging performance in biotech (BioNTech notwithstanding).
Second, under what conditions is vertical integration a socially beneficial strategy for accelerating innovation? Successful innovation in the biosciences often combines risk-taking by small companies with the development and regulatory resources of larger companies. We conclude that excessive antitrust focus on blocking vertical integration in the biosciences could impede the development of important new products and treatments.
These issues go far beyond Illumina and Grail. But it’s helpful to have the facts about this particular case. Grail has spent the past five years developing a diagnostic capable of screening for 50 different cancers at once — a test set to launch this year — while Illumina makes the hardware that performs those tests. Illumina offered to buy Grail, with the idea of integrating Grail’s technology with its own, to simplify the process of using gene sequencing for clinical diagnostics on a massive scale. If successful, this would dramatically reduce the cost of performing cancer screenings.
The Federal Trade Commission (FTC) intervened to block the acquisition, worried that Illumina would block potential competitors of Grail from using its gene sequencers. Illumina promised to supply these competitors with gene sequencing equipment and supplies without price increases. The FTC, through a complicated series of maneuvers that are not relevant to this paper, temporarily pulled back from its intervention to allow the European Commission to take the first swing at blocking the acquisition. The EU antitrust regulators are planning to rule by July 27 on whether to clear the merger.
And here’s where we come to the first issue: Should the EU antitrust regulators be considering a biotech deal that by the ordinary rules would not come under their jurisdiction? As the Wall Street Journal notes, “Since the merger doesn’t qualify for antitrust review under the bylaws of the European Union or any member states, the Commission asked countries to invoke Article 22 of the EU’s Merger Regulations. This rarely used provision allows countries to refer transactions to the Commission when their governments lack jurisdiction.”
This fits the general EU strategy of “regulatory imperialism.” Rather than focusing on innovation, the EU has tried to position itself as the global leader in regulation in a variety of areas, from artificial intelligence to chemicals to GMOs to data privacy. The European approach to regulation has been framed by the precautionary principle, which puts less weight on the benefits of innovation and more on the potential harms.
That risk-avoiding approach is one important reason why Europe has consistently lagged in biotech. European biotech is not nonexistent — after all, Pfizer partnered with a German biotech firm, BioNTech, to develop a very successful COVID-19 vaccine. Nevertheless, data from the Organisation for Economic Co-operation and Development shows that business spending on biotech research and development (R&D) in the EU comes to roughly one-third that of the U.S.
Tacitly accepting European jurisdiction over American biotech deals has the potential to slow down commercialization of important technologies. According to the New York Times, Europe has been “a world leader in technology regulation, including privacy and antitrust.” In a recent speech, Emmanuel Macron said that during its turn at the helm of the EU presidency, France would “try to deliver a maximum of regulation and progress.” When the EU sets the global standard on regulation and companies choose to comply with it everywhere (even where standards are lower), that’s known as the “Brussels effect.”
First, on privacy, the General Data Protection Regulation (GDPR) has become a de facto floor on policy for many large multinational companies. The problem for companies — especially in biotech and software — is that there are very high fixed costs to product development (and low marginal costs for distribution), and reworking a product for a different regulatory environment is often more trouble than it’s worth. That leads to a race to the top (or bottom, depending on your perspective) in terms of regulation.
In its first few years in effect, GDPR’s flaws have become manifest and EU policymakers are starting to consider reforms to the law. According to a recent joint report from three academy networks, “GDPR rules have stalled or derailed at least 40 cancer studies funded by the US National Institutes of Health (NIH).” The authors go on to note that “5,000 international health projects were affected by GDPR requirements in 2019 alone.” This flawed model for privacy regulation has unfortunately been exported around the globe.
Second, mergers between globally competitive firms with a presence in multiple jurisdictions have to get clearance from multiple antitrust enforcement agencies. If a single agency in a large market objects to the merger, the deal might fall apart completely. For example, a merger between U.S.-based Honeywell and U.S.-based General Electric collapsed after the EU competition enforcement agency decided to block the deal out of concern it would create a monopoly in jet engines. Of course, the EU’s investigation of the Illumina-Grail merger takes that one step further, given the fact that Grail doesn’t conduct any business in the EU, and Illumina’s business there isn’t substantial, with revenues below the usual threshold for antitrust scrutiny for both the European Commission and individual countries.
The next important question raised by the Illumina-Grail purchase is the role of vertical integration. We start with the simple observation that innovating in complex systems is both risky and expensive. That’s true in frontier industries such as electric vehicles and e-commerce, and it’s especially true in the biosciences, with the high hurdle set by the need for safety and efficacy.
The cost of bringing a drug to market is a huge barrier to startups remaining independent. A 2020 paper in JAMA examining 63 of the 355 new therapeutic drugs and biologic agents approved by the U.S. Food and Drug Administration between 2009 and 2018 found that the median capitalized research and development cost per medicine was $985 million. Other studies using private data have found even higher figures. A 2019 study published in the Journal of Health Economics estimated the average cost to reach approval at $2.6 billion (post-approval R&D costs nudge the total up to $2.9 billion).
Should these complex systems be built by one company, which is better able to integrate all the pieces of the puzzle? (Tesla comes to mind when we are discussing electric vehicles). Or is it better to distribute the risk over multiple companies? The biotech industry has mostly followed this second strategy. Risky R&D is done by small firms with financing by high-risk capital such as venture firms. Then the resulting product, if successfully passing clinical trials, is acquired by a larger firm for commercialization.
In some cases, both strategies are important. The initial stages of research and development of a new idea are farmed out to a smaller company and financed by risk capital. And then when it comes time to build the idea into a complex system, the actual integration is done by a larger company, which has an established distribution network and marketing resources for reaching patients in a targeted fashion. This can greatly accelerate the development process.
The question, then, is whether this integration would be easier within one company or at arm’s length. Illumina has made an offer to buy Grail, which was originally spun off from Illumina in order to get funding from risk capital. The goal, obviously, is to accelerate the development of this game-changing integration.
The FTC has objected to the acquisition, because the agency worries about Illumina prioritizing its internal customer over other potential cancer diagnostics systems. Certainly, it’s true that some vertical mergers are anti-competitive. “Killer acquisitions” are one type of merger in biotech that is anti-competitive in nature. A recent paper from Ederer, Cunningham and Ma found that between 5% and 7% of acquisitions in the pharmaceutical industry are killer acquisitions, meaning the incumbent firm purchased the startup with the intention of shutting down one or more of its products, because the legacy company offers a competing product that is more profitable.
There is increasing agreement among regulators on both sides of the Atlantic that acquisitions — especially in the pharmaceutical sector — need to be scrutinized more closely if products have the potential to be killed off post-acquisition. One heuristic a regulator might use is to look at how much overlap there is between the acquired product and the incumbent, especially in terms of benefits and use cases. If the incumbent’s product is still on patent, then there is a significant incentive to acquire a competitive product that might be disruptive to an acquirer’s portfolio and shut down the new product.
But there’s little evidence that most vertical acquisitions are anti-competitive. Vertical mergers (the combination of two companies at different layers of the supply chain) are less likely to be anticompetitive than horizontal mergers (the acquisition of a direct competitor), as both economic theory and empirical evidence show. Regarding the theory, firms are engaged in “make or buy” decisions all the time. If they choose to produce an input in-house instead of buying it from the market, then they have vertically integrated (either by developing the capacity on their own or by acquiring another firm with that capacity). Prohibiting firms from vertically integrating via acquisition would forgo some of the benefits of economies of scope and economies of scale. A literature review by Lafontaine and Slade showed that vertical mergers were procompetitive on average.
One of the most common reasons vertical mergers are less suspect than horizontal mergers has to do with “double marginalization.” If you assume two products are monopolies in their respective markets, then the producers of those products will each charge the monopoly price, which is higher than socially optimal. If the two products are complementary, then the companies can merge and create a positive sum scenario by lowering prices. Lower prices reduce deadweight loss, which is good for consumers, and lead to higher profits for the combined firm.
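To make the double-marginalization logic concrete, here is a minimal numerical sketch of the textbook successive-monopoly case, using an assumed linear demand curve and illustrative cost numbers that are not drawn from the Illumina-Grail case.

```python
# Illustrative arithmetic for double marginalization with linear demand P = 100 - Q
# and an upstream marginal cost of 20 (assumed numbers, purely for illustration).
a, c = 100.0, 20.0

# Separate monopolists: the upstream firm sets wholesale price w, the downstream
# firm then picks quantity Q = (a - w) / 2, so the upstream firm's best w is (a + c) / 2.
w = (a + c) / 2                                      # wholesale price = 60
q_sep = (a - w) / 2                                  # quantity = 20
p_sep = a - q_sep                                    # retail price = 80
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep   # combined profit = 1200

# Vertically integrated firm: picks Q to maximize (P - c) * Q directly.
q_int = (a - c) / 2                                  # quantity = 40
p_int = a - q_int                                    # retail price = 60
profit_int = (p_int - c) * q_int                     # profit = 1600

print(f"Separate firms:  price {p_sep:.0f}, quantity {q_sep:.0f}, total profit {profit_sep:.0f}")
print(f"Integrated firm: price {p_int:.0f}, quantity {q_int:.0f}, profit {profit_int:.0f}")
```

In this toy example, the merged firm charges 60 rather than 80, sells twice as much, and earns higher profit, which is the positive-sum outcome for consumers and the firm described above.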
We note that if the FTC ruling stands, it will mean that developers of complex integrated systems will choose to keep their technologies in-house rather than spinning them out and running the risk of having a later acquisition blocked. And innovative development will be slowed rather than accelerated.
For the firms that adopt them, artificial intelligence (AI) systems can offer revolutionary new products, increase productivity, raise wages, and expand consumer convenience. But there are open questions about how well the ecosystem of small and medium-sized enterprises (SMEs) across Europe is prepared to adopt these new technologies. While AI systems offer some hope of narrowing the recent productivity gap between small and large firms, that can only happen if the technologies actually diffuse throughout the economy.
Policymakers have naturally been attracted to this topic as SMEs represent the backbone of the European economy, making up 99% of all businesses. And an AI-enabled productivity boost would be particularly timely as SMEs are recovering from the effects of the ongoing Covid-19 crisis.
At the same time, the EU has articulated a desire to be at the forefront of developing novel AI regulations. The EU is contemplating new regulations on the development and deployment of AI that seek to address dual priorities: How can the EU simultaneously increase the uptake of AI by European firms while shaping the regulatory environment to protect European consumers from harm?
And while the EU’s ambition is laudable, the Commission’s pronouncements have so far failed to grapple meaningfully with the significant tradeoffs that the regulation of new technologies entails. As is the case with all new technologies, the adoption of AI systems — i.e., the broad suite of technologies that are designed to automate or augment aspects of human decision making — involves a tradeoff between risk-mitigation and rapid adoption. Unless carefully managed, the effort to protect consumers from potential risks places additional burdens on firms, which can chill investment and adoption, especially among SMEs. Policymakers thus need to achieve a balance between these two objectives.
With this in mind, our report outlines various policy considerations that should enable policymakers to achieve this balance between the two goals and embody the principle of Thinking Small First — the idea that public policy should consider the potential impacts on SMEs from the ground up. The report discusses the promise for AI systems to increase productivity among EU SMEs, the current barriers to AI uptake, and policy tools that may be useful in managing the risks of AI while maximizing the benefits.
Barring some 11th hour drama in the House, President Biden is expected to sign his $1.8 trillion American Rescue Plan into law this week. It’s a landmark achievement that gives us reason to hope our government may not be broken after all.
Although he’s only been in office 46 days, Biden already has done more to lift the nation’s morale and make the economy work for everyone than his predecessor managed in four turbulent years. In case we’ve forgotten, this is what a real president looks like.
Biden’s plan focuses intently on defeating the coronavirus pandemic that has frozen normal life for a full year. It provides ample money to ramp up vaccinations, enable schools to reopen, help people who have lost their jobs and businesses, keep state and local governments running – all of which will speed economic recovery.
In shaping and steering the package through Congress, Biden has drawn on a deep reservoir of political experience and cordial relationships. He also has been aided by a qualified and competent White House staff (another contrast with the man he replaced). He has radiated calm and shown impressive discipline in ignoring political distractions and media sideshows to deliver swiftly on his core campaign promise.
The record will show the relief bill passed with almost zero votes from Republicans. But it will also show that Biden got the job done without vilifying his opponents or deepening the country’s paralyzing cultural rifts.
Plenty of pragmatic progressives – myself included – have misgivings about parts of the bill. Its cash payments are not well-targeted, and $350 billion appears to be more than state and local governments actually need. Those dollars would be better spent on science and technology, high skills for non-college workers, clean energy infrastructure and other essential public investments. Amid $5-6 trillion deficits and cascading public debt, we could face some difficult fiscal adjustments in the years ahead.
On the other hand, the Biden package is deeply progressive. It throws lifelines to vulnerable Americans who have borne the brunt of the virus and the Covid recession: the old, low-income workers, poor and minority communities with severe health challenges and hungry families. Through an expanded child tax credit, the bill also would create the equivalent of a child allowance that is expected to cut child poverty in half.
Policy disagreements aside, Biden correctly gauged the magnitude of the nation’s health and economic emergency. After a long, grinding year of loss, suffering and social isolation, his instinct to go big is right. So is his desire to cultivate national “unity” and reach out to reasonable Republicans, who are beset by extremists in their party.
This is what governing in a Constitutional democracy is supposed to look like. The public seems to approve, even if Biden’s left-wing detractors don’t. The most recent AP poll shows the president’s approval rating hitting 60 percent.
By clearing his first big hurdle, Biden has dealt himself a strong political hand for the next one: Winning passage of his coming “Build Back Better” plan for building a more just, clean and resilient U.S. economy.
President Biden has set the ambitious, important climate goal of achieving net zero emissions from the nation’s electric power sector by 2035. Already, natural gas has played a key role in lowering U.S. carbon dioxide emissions in the past 15 years, in part by displacing higher emitting coal. But gas, which still provides more than a third of America’s electricity, must play an even greater part in America’s decarbonization plans going forward.
Right now, gas uniquely supports the expansion of renewable energy by providing an instantly dispatchable source of electricity. Unlike coal and nuclear plants, natural gas power plants turn on and off within minutes, allowing the grid to quickly match supply and demand even when the wind isn’t blowing and the sun isn’t shining. As a U.S. National Renewable Energy Laboratory report has noted, this unique flexibility of natural gas generation thereby facilitates the steady expansion of renewables.
Yet as we move toward decarbonization, maintaining an affordable and reliable grid is becoming more challenging, due to the increased frequency of extreme weather events and the rapid growth of intermittent and variable wind and solar power. Retaining sufficient natural gas generation to backstop wind and solar power will reduce costs and increase reliability compared to a grid that relies entirely on renewables or on often more expensive electricity storage. Given these realities, demands to ban shale gas development and fracking are not consistent with an economically balanced approach to decarbonizing the electric grid, as President Biden and other administration officials have repeatedly noted.
Michigan and Georgia state legislators are considering legislation that would expand access to telehealth services for contact lens and eyeglasses prescription renewals. While a seemingly small change, it would make it easier for consumers to get new glasses and contacts and help push the states toward more innovative health care more broadly. This week I had the opportunity to testify to both state legislatures why I agree with these proposed changes.
Under current law, both Michigan and Georgia treat ocular health differently than other types of health care. Patients can see physicians remotely to renew drug prescriptions but not eyeglass or contact lens prescriptions. The states legislatively limited access to telehealth over safety concerns rather than letting the governing boards of medicine decide where a person could receive ocular health care.
In recent years, renewing contact lens and eyeglass prescriptions has become commonplace in many states. After an initial prescription is provided through an in-person exam, certain low-risk contact lens wearers can use home computers and mobile phones to check their vision and take a picture of their eye to renew prescriptions for up to five years. The information is sent to a local ophthalmologist, who reviews the results and issues a prescription renewal if appropriate.
But this type of renewal is banned in Michigan and Georgia. The good news is, the state legislatures are considering HB 4356 and HB 629, innovative bills which would roll back these limits and allow the residents of Michigan and Georgia, respectively, to use telehealth to renew lens prescriptions.
While telehealth will never be a panacea for all of health care, it does have the potential to increase access and reduce costs. Using state law to unnecessarily block access to certain telehealth services is just one (of many) reasons why health care costs too much in the United States. Here’s a technology that allows people to avoid unnecessary in-person visits, and yet it’s banned from being used for basic lens prescription renewals. And as we’ve seen from the Covid-19 pandemic, telehealth can both improve access and reduce costs when used appropriately.
The Covid-19 pandemic has laid bare how some parts of the health care system maintain barriers to access solely for revenue purposes. To reduce costs and improve access, we need to make it easier to access needed care – whether or not we are in a pandemic.
Michigan and Georgia should vote to approve these bills to make it easier for their constituents to get their eyeglass and contact lens prescriptions.
Representative Ron Kind of Wisconsin’s 3rd District joins the PPI Podcast this week, offering the perspective of a Democrat in a district twice-won by Donald Trump. Kind discusses the work of the New Dem Coalition in the first few weeks of the Biden administration, the impact of Trump’s trade war on farmers, and the need for Democrats to step up in rural areas.
President Biden’s upcoming address to Congress is an opportunity to speak directly to the more than 10 million Americans who find themselves out of a job because of the pandemic recession. On the question of how to help these workers, Biden need look no further than the Build Back Better platform he campaigned on. A key element of the BBB platform is a $50 billion investment in workforce development, including apprenticeships.
Americans, especially young adults, need more pathways to careers that don’t require a traditional four-year college degree. While Millennials are the most educated generation in history, as of 2015, only about a third of Americans ages 25 to 34 were college graduates. That number is even lower for older Americans. Most people don’t go to college, and apprenticeships are an underappreciated way to find jobs for the millions of job seekers who will have to find work after the pandemic, including those whose pre-Covid jobs might never come back. Compared to other high-income countries, the U.S. lags significantly when it comes to apprenticeships and other “active labor market” policies, and it’s time for us to make investments to fill this gap.
Recently, the White House announced several ways that the Biden administration is strengthening registered apprenticeships across the country.
President Biden has endorsed Congressman Bobby Scott’s bipartisan National Apprenticeship Act of 2021, which will “create and expand registered apprenticeships, youth apprenticeships and pre-apprenticeship programs.” This legislation had been passed in the House in November 2020, in the last Congress, but the Republican Senate Majority failed to take up the bill for a vote. With Democrats now in the majority, there is renewed hope that the country’s underfunded and outdated apprenticeship system can finally be modernized to meet our 21st-century workforce needs. The reauthorization of the National Apprenticeship Act is estimated to create nearly one million high-quality apprenticeship opportunities and includes provisions that target opportunities for key groups, such as young adults, childcare workers, and veterans. The bill also aims to increase apprenticeships in industries that do not require a four-year degree for well-paid jobs, such as healthcare, IT, and financial services. We’ve supported this bipartisan legislation in the past and we look forward to seeing it make its way through Congress.
Additionally, the White House has reversed a harmful Trump-era policy by rescinding industry-recognized apprenticeship programs (IRAPs), which threatened to undermine registered apprenticeship programs across the country and to weaken protections for trainees.
These are important steps, but the White House and Congress should go even further to modernize the current apprenticeship system. First, they should formalize and incentivize intermediaries (public or private) that create “outsourced” apprenticeship programs and get paid for each placement: they hire candidates who meet certain criteria (such as eligibility for Pell grants), provide them with an apprenticeship that pays minimum wage or better, train them, and place them in permanent positions. Second, they should create relationships with high schools to set up apprenticeships and career and technical education programs that begin in the 11th or 12th grade and pair students with local employers. These have shown promise in other high-income countries that employ a high percentage of their younger workers through apprenticeships. And, lastly, they should create public service apprenticeship opportunities and programs at all levels of government, including in industries such as information technology, accounting, and healthcare.
As President Biden crafts his address to Congress in the coming weeks, we hope that he acknowledges that millions of Americans who are out of a job lack a college degree. For them, other pathways to jobs, such as through investing in apprenticeships, will be a critical step forward in regaining their economic footing.
Among the lesser reported elements of the Covid-19 relief bill making its way through Congress this month are several improvements to Medicaid to bolster health insurance coverage for low-income individuals. One specific provision would allow states to extend Medicaid coverage to women for up to a full year after giving birth. Newborns in the U.S. are currently covered for up to twelve months. We’ve supported this critical expansion in the past, citing evidence that the U.S. maternal mortality rate has shamefully risen to be the highest among high-income countries.
According to the Centers for Disease Control and Prevention (CDC), the U.S. maternal mortality rate in 2018 was 17.4 deaths per 100,000 live births. The rate for Black women is more than twice that figure.
Under current law, Medicaid is only required to cover new mothers for 60 days postpartum, despite the fact that approximately 13 percent of maternal deaths occur six or more weeks after a woman gives birth and Medicaid covers over 40 percent of all births in our country. States that have expanded Medicaid under the Affordable Care Act (ACA) allow eligible women to stay on the program after childbirth. But roughly a dozen states have declined to expand Medicaid, and the one-year extension of postpartum coverage would help women living in those states.
The expansion will help address a widespread societal inequity when it comes to access to health care. Low-income women and women of color are disproportionately likely to die from childbirth and pregnancy-related complications. Yet these deaths are not inevitable. A 2018 report found that over 60 percent of pregnancy-related deaths are preventable. A few years ago, California started collecting data on maternal deaths and reviewing the clinical failures that led to fatalities. As a result, the state was able to produce evidence-based checklists and training programs to help clinicians address two lethal conditions: high blood pressure and hemorrhage. Now, its maternal death rate is a quarter of that of the United States as a whole.
Pregnancy and the postpartum period are an incredibly vulnerable time in any woman’s life. We should be supporting new mothers, and one way is by giving them the health coverage necessary to navigate postpartum care and complications. We applaud Congress, including Rep. Robin Kelly (D-Ill.) who is among those spearheading this effort, for acting to address this key inequity in healthcare access for new mothers, and we look forward to seeing it enacted along with Covid relief next month.
This week’s episode is a joint episode of the Neoliberal Podcast and the PPI Podcast, featuring guest host Colin Mortimer of PPI’s Center for New Liberalism. Colin sits down with Oregon State Treasurer Tobias Read to talk about the ways Oregon is optimizing its treasury to support and empower Oregonians. Colin and Treasurer Read discuss the day-to-day role of a State Treasurer, and how his team uses the state’s investment power to help citizens, as well as how behavioral ‘nudge’ programs can increase retirement savings.
Major electric vehicle announcements by President Joe Biden and General Motors are being hailed as a turning point in the transition to widespread EV production and deployment in America. This matters greatly, because this crucial technology can both jump-start U.S. manufacturing to ease the economic and jobs crisis, and rapidly reduce emissions that cause climate change.
But there are still serious barriers to EVs. By far the biggest is the lack of American consumer demand for electric vehicles.
The restaurant industry is hurting. Between February and April of last year, more than 6 million food service workers lost their jobs.1 As of December, more than 110,000 restaurants had closed permanently or long-term.2
The industry has some big chains, but most restaurants are quintessentially small businesses. More than 9 in 10 restaurants have fewer than 50 employees. More than 7 in 10 restaurants are single-unit operations.3 Restaurants also offer lots of entry-level jobs for less-skilled workers (almost one-half of workers got their first job experience in a restaurant).
There is almost no safe way to allow indoor dining during an outbreak of a lethal, airborne, and highly contagious virus. Customers must remove their masks to eat and restaurant dining is traditionally done indoors with tightly packed groups of people. Some restaurants have chosen to remain open by relying on pickup and delivery orders instead of indoor dining, and for certain kinds of food, like pizza, this is a natural extension of their previous business model. For others, it’s a difficult transition to figure out pricing and what types of food work for takeaway. Many restaurants rely on third-party services for aggregating online orders and for fulfilling the delivery to customers.
Delivery services have been one of the few sectors expanding during the pandemic, providing work for those who need it and helping many Americans stay safe. With the goal of helping restaurants, some states and cities have temporarily capped the commissions these platforms can charge restaurants for delivery. These price controls are popular with elected officials because they look like a cost-free way to help struggling restaurants, but their costs are hidden rather than absent, and they will hit small restaurants and their workers hardest.
While well-intentioned, imposing price controls will slow the economic recovery in a sector that’s among the hardest hit by COVID. To understand why, it’s important to know how these platforms work. Food delivery services are multi-sided markets, meaning the platform owner is trying to connect multiple “sides” of the market in mutually beneficial exchange. In this case, the business is trying to connect three groups: drivers, restaurants, and consumers. The balance of fees, commissions, and prices on all three sides of this market is set to achieve a high volume of orders, meaning revenue for restaurants and earnings for delivery drivers. Price controls on one side of the market upset this delicate balance.
In general, most economists view price controls as an ineffective and inefficient means of achieving lower costs for underserved groups. In a classic example, rent control leads to underinvestment in construction and maintenance of housing. Landlords are incentivized to convert their apartments into condos or let friends and family live in the units. Under rent control, property owners often charge a large upfront payment to secure a lease. Economists are also skeptical of vaguely written price gouging laws or price controls on essential medical supplies during a public health emergency. A much better solution, many economists argue, is for the government to step in and pay the market rate (to encourage supply) and redistribute the goods based on need.
There is a narrow range of circumstances when price controls can be beneficial for social welfare. In static and monopolistic markets, price controls can make sense to prevent dominant incumbents from charging monopoly prices and harming consumers. A second exception to the rule is during a natural disaster or other emergency. If supply is extremely inelastic (meaning non-responsive to price changes) during a crisis, then price-gouging laws can be beneficial on net. But to be clear, these laws need to be precise and narrow in scope.
If the emergency lasts beyond a few days or weeks, then relaxing price controls might be necessary to encourage an increase in supply.
Neither of these exceptions applies to the food delivery market in this crisis. The market for food delivery services is highly competitive (aggregate profits in the industry are negative4) and the current public health emergency has already lasted for more than a year. Instead, we can expect price controls on food delivery to have the usual negative effect. And based on early data from the cities that have capped commissions, that’s exactly what’s happening.5 Companies are shifting the costs from restaurants to consumers in the form of higher fees, and because consumers are generally more sensitive to price increases, this is leading to a reduction in output in these markets.6 Fewer orders means less business for restaurants and less income for drivers.
There’s a better way forward. The federal government can provide (and has provided) direct bailouts of the businesses and their workers. Unemployed workers have received extended and bonus unemployment benefits. These benefits should be continued for the duration of the public health emergency. Restaurants should receive grants and loans so they can continue paying rent and other fixed costs while closed. These programs should be funded to the level that every restaurant can benefit from them. “Just give people and businesses cash” sounds simple (and expensive), but the alternatives are much worse. Providing no help to restaurants would force them to choose between closing permanently or staying open — thus exacerbating and prolonging the pandemic. Imposing price controls will likely lead to a reduction in output, harming consumers, drivers, and restaurants in the process. The answer is for the federal government to help bridge the gap to the end of the pandemic by continuing and increasing its support for workers and businesses.
INTRODUCTION: RESTAURANTS NEED HELP
The restaurant industry has been hit especially hard by the pandemic. COVID-19 is an airborne respiratory illness that spreads most easily when people are (1) indoors (2) unmasked (3) and close together for an extended period of time. Unfortunately, that description matches restaurants perfectly, which is why many states forced them to close indoor dining during various stages of the pandemic. It’s not the fault of restaurant owners or workers that they were unable to stay open, so policymakers have a duty to make them whole.
More than one in six restaurants have been forced to close permanently — about 110,000 establishments — according to data from the National Restaurant Association.7 Small local restaurants are doing much worse than large chains, which have the advantages of “more capital, more leverage on lease terms, more physical space, more geographic flexibility and prior expertise with drive-throughs, carryout and delivery,” according to the Wall Street Journal.8
Understandably, federal, state, and local governments are trying to support the restaurant industry during this difficult time. The federal government supported restaurant workers with extended and bonus unemployment benefits and it supported businesses through the Paycheck Protection Program (PPP) with $350 billion in April 2020 and $284 billion in December 2020.11 Of course, state and local governments, most of which have balanced budget rules12 (and none of which can print its own currency), are unable to serve as lender or insurer of last resort. Good intentions — the desire to help local restaurants — have unfortunately led some states and cities to adopt a shortsighted and counterproductive policy response: price controls.
San Francisco was one of the first cities to institute a commission cap on meal delivery services, limiting the fees they can charge restaurants to 15 percent.13 Seattle, New York, Washington, D.C., and other cities soon followed suit. As expected, the food delivery apps raised consumer fees in response. DoorDash added a $1.50 “Chicago Fee” to each order after the City Council capped restaurant commissions at 15 percent.14 Uber Eats added a $3 “City of Portland Ordinance” surcharge after the city imposed a 10 percent commission cap.15 In Jersey City, in response to a 10 percent commission cap, Uber Eats added a $3 fee and reduced the delivery range for restaurants.16
To understand why these measures haven’t achieved their stated aims, and why they will likely continue to have unintended consequences, first we need to understand what price controls are and the limited contexts in which they are effective.
WHY PRICE CONTROLS ARE USUALLY BAD
A price control is a government mandate that firms in a given market cannot charge more than a specified maximum price for a good or service (e.g., rent control for apartments) or they cannot charge less than a specified minimum price for a good or service (e.g., minimum wage for labor). Governments usually implement price controls with a noble aim of reducing costs of essential goods (e.g., shelter, fuel, food, etc.) for low-income people or supporting the revenues of a favored industry (e.g., price supports for farmers).
Policymakers tend to justify the imposition of a price control by arguing that the unrestrained forces of supply and demand will not ensure an equitable distribution of resources in essential markets. For politicians seeking to retain their jobs, price controls have the added benefit of being “off-budget,” meaning elected leaders don’t need to raise taxes to pay for them. While the costs of price controls may be unseen from a budgetary perspective, they are certainly not zero. Consumers, workers, and businesses are harmed by the lost output due to shortages under a price ceiling and excessive output under a price floor.
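To put rough numbers on that lost output, here is a minimal sketch of a textbook linear supply-and-demand market (all figures invented for illustration) in which a binding price ceiling creates a shortage and a deadweight loss:

```python
# Textbook price-ceiling example with hypothetical linear curves.
# Demand: Qd = 100 - P.  Supply: Qs = P.  Prices and quantities are illustrative.

def demand(p): return 100 - p
def supply(p): return p

# Unregulated equilibrium: 100 - P = P, so P* = 50 and Q* = 50.
p_star = 50.0
q_star = demand(p_star)

# Impose a price ceiling below the market-clearing price.
ceiling = 30.0
q_supplied = supply(ceiling)            # only 30 units are offered
q_demanded = demand(ceiling)            # but 70 units are wanted
q_traded = min(q_supplied, q_demanded)  # trade is limited by the short side (supply)
shortage = q_demanded - q_supplied      # 40 units of excess demand

# Deadweight loss: the surplus on the trades between q_traded and q_star that no
# longer happen (the area between the demand and supply curves over that range).
# At quantity q, buyers value the good at (100 - q) and sellers' cost is q.
dwl = 0.5 * (q_star - q_traded) * ((100 - q_traded) - q_traded)

print(f"Unregulated equilibrium: price {p_star}, quantity {q_star}")
print(f"With a {ceiling} ceiling: {q_traded} units traded, shortage of {shortage} units")
print(f"Deadweight loss: {dwl}")        # 0.5 * 20 * 40 = 400
```

In this toy market the ceiling lowers the sticker price for the buyers who still find the good, but 20 units of mutually beneficial trade never happen at all, and the 40-unit gap between what buyers want and what sellers offer is what shows up in practice as queues, rationing, and black markets.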
As Fiona Scott Morton, a professor of economics at Yale University, wrote, “If government prevents firms from competing over price, firms will compete on whatever dimensions are open to them.”17 And there are a multitude of dimensions beyond price. In response to price controls during World War II, hamburger meat producers started adding more fat to their burgers. Candy bar companies made their packages smaller and used inferior ingredients. During World War I, consumers who wanted to buy wheat flour at the official price often had to buy rye or potato flour too.18
Generally speaking, after rent control takes effect, landlords reduce their maintenance efforts on rent-controlled apartments.19 They also pull rental units from the market and either sell them as condos or let friends and family live in them. Landlords can also capture some of the original economic value of their rental units by adding a fixed upfront payment to rental agreements. When airfare prices were set by the Civil Aeronautics Board between 1938 and 1985, airlines competed on other non-price dimensions, including improving the meal quality and increasing the frequency of flights and the number of empty seats.
The stricter the price controls are, the more likely bribes and other black market activity will substitute for previous white market activity. Even worse, the black market has higher prices than the legal market because sellers need to be compensated for the risk of being caught and punished by the authorities. Queuing and rationing are also extremely common under price controls. Hugh Rockoff, a professor of economics at Rutgers University, explains how price controls on oil had this effect in the 1970s:
Because controls prevent the price system from rationing the available supply, some other mechanism must take its place. A queue, once a familiar sight in the controlled economies of Eastern Europe, is one possibility. When the United States set maximum prices for gasoline in 1973 and 1979, dealers sold gas on a first-come-first-served basis, and drivers had to wait in long lines to buy gasoline, receiving in the process a taste of life in the Soviet Union.20
Henry Bourne, an early twentieth century economist, perhaps summed it up best when describing price controls in France during the French Revolution:21
It was the honest merchant who became the victim of the law. His less scrupulous compeer refused to succumb. The butcher in weighing meats added more scraps than before…other shopkeepers sold second-rate goods at the maximum [price]… The common people complained that they were buying pear juice for wine, the oil of poppies for olive oil, ashes for pepper, and starch for sugar.
Indeed, price controls do not make competitive pressures magically go away; they merely get sublimated into other dimensions of competition — and those who abide by the spirit of the law are punished the most. The aforementioned problems are why economists dislike price controls and favor market-clearing price mechanisms. The Initiative on Global Markets (IGM) regularly surveys a group of leading economists on various questions of public interest. The responses to questions about different kinds of price controls have been quite lopsided.
A 2012 survey about rent control asked the following question:22
Local ordinances that limit rent increases for some rental housing units, such as in New York and San Francisco, have had a positive impact over the past three decades on the amount and quality of broadly affordable rental housing in cities that have used them.
A 2014 survey asked a similar question about surge pricing.23 And a 2020 survey points to an alternative mechanism for achieving the efficiency benefits of high prices without incurring the distributional costs:
Governments should buy essential medical supplies at what would have been the market price and redistribute according to need rather than ability to pay.
THE EXCEPTIONS WHEN PRICE CONTROLS ARE GOOD
There are two general exceptions when the benefits of price controls might outweigh the costs. First, in markets with natural monopolies and static competition, price controls can prevent dominant incumbents from harming consumers by charging monopoly prices (and restricting output). This is generally how utilities regulation works in the US. Electricity, natural gas, water, and sewage are examples of natural monopolies. It would be highly inefficient to lay two sets of water, gas, or sewage pipes to every house. Similarly, it wouldn’t make sense to have two electrical grids that connect to every house.
There are also low risks to investment efficiency by imposing price controls on these services. We have very likely reached the end of history in terms of innovation in water, sewage, and natural gas. Firms don’t need the incentive of large monopoly profits to invest in water innovation because it’s just water. The optimal number of competitors in these markets is likely one. Utility regulators work closely with these companies to set prices that allow the firms to recover their fixed costs while earning a reasonable but not extortionate profit.
As Noah Smith, a columnist for Bloomberg, pointed out recently, economists have warmed to one other type of price control over the last few decades: the minimum wage.24
And this shift has occurred for the same reason economists are less worried about price controls in utilities markets: lack of competition. Empirical evidence has started to pile up showing significant monopsony power in labor markets, particularly in rural areas.25 As Noah Smith has illustrated, when a firm has monopsony power in a local labor market, a minimum wage can actually increase employment.
This isn’t the case in all labor markets, of course. Urban markets have much more competition for low wage workers than rural markets. And economists are still worried that a national minimum wage of $15 per hour might lower employment in many states.26 But modest minimum wage increases are a price control that economists feel increasingly comfortable supporting.
The other axis to consider in addition to competition is time. Is the price control permanent or temporary? In the event of natural disasters and public emergencies, price controls (such as price gouging laws) can be reasonable. The normal reason policymakers should allow prices to spike in response to surging demand is to incentivize more supply to enter the market. But in a period of days or a couple of weeks during a disaster, supply may essentially be fixed (due to lack of outside access to the affected market). For very limited periods of time, caps on prices can ensure that a fixed quantity of supply is not allocated merely on willingness to pay (which is often a function of wealth as much as preferences).
PRICE CONTROLS AND MULTI-SIDED PLATFORMS
Before we examine how price controls are likely to affect the food delivery market, let’s first review the basic business models in question here, because they are distinct from traditional markets with only one type of customer. Food delivery apps are operating what are known as multi-sided platforms or markets.
What’s a multi-sided platform?
First it’s important to understand network effects. There are direct network effects and indirect network effects. Direct network eff
ects are when a product becomes more valuable to an individual user as more total users start using it. The telephone is the classic example. A telephone is only valuable insofar as it can be used to call other people who also own telephones. Indirect network effects are when consumers derive value from a distinct group of users on a platform. For example, consider shopping malls. The shopping mall owner needs to appeal to tenants to ensure the mall has lots of attractive stores for shoppers. But stores only want to sign lease agreements for space in shopping malls with lots of shoppers. The shopping mall owner is in a sense a matchmaker for these two groups. Newspapers and magazines are another example from the analog era. Advertisers want to advertise in publications with a lot of readers and readers want to read engaging content at a low cost. Publishers bring readers and advertisers together in a mutually beneficial exchange.
Digital markets often have these indirect network effects, too. For example, drivers want to drive on ride-hailing apps with lots of riders and riders want to ride on ride-hailing apps with lots of drivers. It’s Uber and Lyft’s job to set the price schedule (the commission it charges drivers, incentives it offers drivers and riders) at the optimal level. The same is true for operating systems. App developers want to develop apps for platforms with lots of users and users want to use platforms with lots of apps. Ditto for video game consoles: video game developers want to develop games for consoles with lots of gamers; gamers want to buy consoles with lots of games.
One of the most important questions for the owner of a multi-sided platform is how to set the prices on each side of the market. Economic research shows that the platform owner should charge lower prices to the side of the market that has relatively elastic demand (meaning consumers are sensitive to price changes and will change their quantity demanded sharply) and higher prices to the side of the market that has relatively inelastic demand.27 The most elastic side should pay the lowest price, and often it makes sense to charge them below-cost prices (“free shipping” or “free delivery”). That’s the “subsidy” side of the platform. The side with the lower elasticity of demand is the “money” side. Generally speaking, consumers have a higher elasticity of demand and suppliers (e.g., drivers, merchants, developers, hosts, etc.) have a lower elasticity of demand.
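Here is a minimal sketch of that pricing logic, using a toy model with invented sensitivities and costs (none of these figures describe any actual platform). The platform chooses a per-order consumer fee and a per-order restaurant commission; because consumers are assumed to be the more price-sensitive side, the profit-maximizing structure loads the margin onto restaurants and keeps the consumer fee near zero, or even negative:

```python
# Toy two-sided pricing model; every parameter here is invented for illustration.
import itertools

COST_PER_ORDER = 5.0    # hypothetical cost of fulfilling one delivery
SENS_CONSUMER = 0.08    # volume lost per dollar of consumer fee (more sensitive side)
SENS_RESTAURANT = 0.03  # volume lost per dollar of restaurant commission (less sensitive)

def volume(fee_c, fee_r, base=1000.0):
    """Weekly orders as a function of the per-order fee charged to each side."""
    return (base
            * max(0.0, 1 - SENS_CONSUMER * fee_c)
            * max(0.0, 1 - SENS_RESTAURANT * fee_r))

def profit(fee_c, fee_r):
    return (fee_c + fee_r - COST_PER_ORDER) * volume(fee_c, fee_r)

# Grid-search the profit-maximizing fee pair; negative consumer fees are allowed,
# i.e. the platform may subsidize the elastic side of the market.
consumer_fees = [x / 10 for x in range(-50, 121)]    # -$5.00 to +$12.00 per order
restaurant_fees = [x / 10 for x in range(0, 301)]    #  $0.00 to +$30.00 per order
best = max(itertools.product(consumer_fees, restaurant_fees), key=lambda f: profit(*f))

print(f"Profit-maximizing consumer fee:   {best[0]:.2f} per order")
print(f"Profit-maximizing restaurant fee: {best[1]:.2f} per order")
print(f"Orders at those fees:             {volume(*best):.0f}")
# With these made-up sensitivities the consumer fee comes out slightly negative
# (a subsidy) while the restaurant side carries nearly all of the margin: the
# classic "subsidy side" / "money side" split.
```

This is only a sketch of the inverse-elasticity intuition, not an estimate of any real platform’s fee schedule; the takeaway is simply that the side that reacts most strongly to price is the side the platform wants to keep cheap.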
What are the likely effects of a price control on a multi-sided platform?
Research from Rob Seamans and Feng Zhu studied how Craigslist’s entry into various local markets affected the classified ads business of local publishers.28 Remember, newspapers are also operating multi-sided markets. They need to attract a large number of readers so they can then attract a large number of advertisers. Most classified ads on Craigslist are free, so its market entry represented a marked increase in competition on one side of the publisher’s market. For publishers, this leads to “a decrease of 20.7 percent in classified-ad rates, an increase of 3.3 percent in subscription prices, a decrease of 4.4 percent in circulation, an increase of 16.5 percent in differentiation, and a decrease of 3.1 percent in display-ad rates.” The authors go on to show that “these affected newspapers are less likely to make their content available online.” Changes on one side of a multi-sided market ripple throughout the other sides.
While the research literature on multi-sided platforms offers some insight about what might happen in the event of a price control on one side of food delivery platforms, we can also just look at real world evidence to see what’s happening. According to a recent article in Protocol:
On May 7, Jersey City capped delivery app fees charged to restaurants at 10%, instead of the typical 15% to 30% many such platforms take. The next day, Uber Eats added a $3 delivery fee to local orders for customers and reduced the delivery radius of Jersey City’s restaurants.
Now, fewer people are ordering from the restaurants via Uber Eats and instead are shifting to other platforms, the company and the town’s mayor both confirmed to Protocol.29
When cities or states impose a price control on the commissions delivery apps can charge restaurants, they are unknowingly destroying the delicate balance platform owners have struck to attract enough consumers and suppliers on the platform to make the economics work. In cases where the government hasn’t capped commissions and fees across all sides of the platform, the first step for the app owner is to raise fees on consumers to make up for the lost revenue from the restaurant. But as mentioned earlier, the consumer side has a higher elasticity of demand than the restaurant side, so an equivalent price increase will disproportionately decrease demand on that side of the market.
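A back-of-the-envelope sketch of this mechanism, with invented numbers and assuming the platform simply passes the capped commission revenue through to consumers as a flat per-order fee (roughly what the city-level examples above describe), looks like this:

```python
# Back-of-the-envelope pass-through example; all numbers are invented.
# Order volume responds to each side's per-order fee, and consumers are assumed
# to be the more price-sensitive side.

BASE_ORDERS = 1000.0     # hypothetical weekly orders before the cap
SENS_CONSUMER = 0.08     # volume lost per dollar added to the consumer fee
SENS_RESTAURANT = 0.03   # volume lost per dollar added to the restaurant commission

def orders(consumer_fee, restaurant_fee):
    """Stylized weekly order volume given the per-order fee charged to each side."""
    return (BASE_ORDERS
            * max(0.0, 1 - SENS_CONSUMER * consumer_fee)
            * max(0.0, 1 - SENS_RESTAURANT * restaurant_fee))

# Before the cap: a $2 consumer fee and a $9 commission (30% of a $30 order).
before = orders(consumer_fee=2.0, restaurant_fee=9.0)

# A 15% cap cuts the commission to $4.50; assume the platform adds the lost $4.50
# to the consumer fee so that its revenue per order is unchanged.
after = orders(consumer_fee=6.5, restaurant_fee=4.5)

print(f"Orders before the cap: {before:.0f}")
print(f"Orders after the cap:  {after:.0f} ({(after / before - 1) * 100:.0f}% change)")
# Shifting the same dollar amount from the less price-sensitive side (restaurants)
# to the more price-sensitive side (consumers) shrinks total order volume, which
# means fewer sales for restaurants and less work for drivers.
```

Under these assumptions the platform’s revenue per order is unchanged, yet weekly orders fall by roughly a third, because a dollar that used to be collected from the less elastic side is now collected from the more elastic one.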
Poorly designed price controls can also have a disparate impact on different business models in the same market. In the food delivery business, for example, there are two common business models with starkly different cost structures. Some companies merely aggregate online orders and leave the restaurant to handle final delivery on its own. The commissions for these services tend to be 15 percent or lower because the costs are much lower than full delivery services. Other services are full stack — they handle the transaction from the beginning of the order until it’s been delivered to the customer. These services charge higher commission rates (up to 30 percent) because paying drivers for their time and expenses is much more costly than merely aggregating online orders. Naive commission caps favor the aggregators over the full stack delivery service providers because the cap is usually non-binding on the low-cost business model. But that low-cost business model is also less innovative. Full-service delivery platforms are reducing transaction costs enough to bring an entirely new category of restaurants into the delivery market.
Price controls would also disproportionately hurt small restaurants. Large chains like McDonald’s negotiate commission rates as low as 15 percent with delivery platforms because they can offer a high, steady volume of orders as well as their own large marketing budgets.30 Smaller restaurants are riskier partners and therefore pay higher commission rates — meaning price controls would disproportionately impact small restaurants. Commission caps might also lead to more vertical integration between restaurant chains and delivery services. Some large chains like Domino’s Pizza already employ their own delivery drivers.31 If enough cities and states implement price controls on third party delivery services, then more chains with high order volumes might decide to bring delivery services in-house to avoid the caps (because there are no commissions in a vertically integrated company).
So, what is the likely effect of these commission caps? Higher consumer fees. Longer wait times. Lower quality service. Reduced restaurant and delivery zone coverage. A switch from full service delivery apps to aggregators. And an increased incentive for the largest restaurant chains to vertically integrate with delivery services.
Lastly, it’s important to note that neither of the two exceptions for the general rule against price controls hold in this case. First, food delivery service markets are highly competitive.32 Most of the companies in this market haven’t been able to reliably turn a profit yet. As Eric Fruits, the chief economist at the International Center for Law and Economics, noted,
Much attention is paid to the ‘Big Four’ — DoorDash, Grubhub, Uber Eats, and Postmates. But, these platform delivery services are part of the larger foodservice delivery market, of which platforms account for about half of the industry’s revenues. Pizza accounts for the largest share of restaurant-to-consumer delivery.33
He goes on to point out that restaurants can also always offer their own delivery service, which serves as a check on the market power of third-party food delivery apps. And restaurants also have the option of apps like ChowNow, Tock, and Olo that offer online ordering as well at substantially lower commissions, largely because they do not offer delivery.
Second, the pandemic is a chronic rather than acute public health emergency. It is now entering its second year and we are still months away from readily available vaccinations for all groups. Price controls would reduce supply at a time when people desperately need delivery services to maintain social distancing.
CONCLUSION: A BETTER WAY FORWARD
While bailouts are never uncontroversial, bailing out the restaurant industry is an easy call. There is no moral hazard risk as there was with the bank bailouts in 2008, when it was reasonable to worry that bailed out financial firms would increase their risky behavior in the future knowing that they would be bailed out in the event of a crisis. In this case, restaurants won’t change their behavior in the future in a way that increases the odds of a deadly pandemic.
A viral pandemic is a perfect example of an exogenous shock — an Act of God (or “force majeure” as insurance contracts put it). By definition, the pandemic affects everyone. Private insurance markets don’t work for pandemics as well as they do for fires or natural disasters because a pandemic occurs everywhere all at once. The private insurance provider would be forced to pay out to all its insured entities simultaneously. Normally, a majority of an insurer’s clients would be unaffected by an event and their premiums would be used to finance payouts for those harmed. In the case of a pandemic, everyone is harmed.
The federal government is the appropriate entity for collectively insuring the population against these kinds of macro-level risks. Using its fiscal and monetary capacity, the government can efficiently insure the entire population across time. Fiscal support comes in the form of deficit-financed spending (we’re effectively borrowing from our future, richer selves) and monetary support comes in the form of lower interest rates and guaranteed loans for businesses and state and local governments.
Deficit spending will need to be paid for in the future, either via inflation or taxes. But deficit spending during a crisis is consistent with welfare-enhancing public policy. Income has diminishing marginal utility. In a time of crisis, we want to be able to borrow against our collective future income, which is exactly what deficit spending allows us to do. Just give people money — don’t mess with prices.