A Politically and Technically Feasible Approach for Handling Regulatory Accumulation

Regulatory accumulation threatens the pace of innovation and growth in America, yet previous attempts to address it have proven unsuccessful. That is why we propose a new approach through the creation of a Regulatory Improvement Commission, which we argue is both politically and technically feasible. This institutional innovation for paring down redundant and outdated rules is described more fully in a 2013 paper we co-authored, and it has now been introduced as very similar bills in both the Senate (by Senators Angus King (I-ME) and Roy Blunt (R-MO)) and the House (by Representatives Patrick Murphy (D-FL), Mick Mulvaney (R-SC), and 20 co-sponsors).

Each President since Jimmy Carter has ordered agencies to do a “retrospective review” of existing regulations in order to identify those that are duplicative, obsolete, or have failed to achieve their intended purpose. However, as a 2007 U.S. Government Accountability Office (GAO) study indicated, these retrospective reviews have fallen well short of identifying problematic regulations for a variety of reasons, including insufficient transparency and a lack of resources. It is extraordinarily expensive and time-consuming to properly evaluate the costs and benefits of any substantial part of any major regulation. And ultimately, an agency has no control over the original enabling legislation as written by Congress.

Rather than getting wrapped up in ideological issues such as big versus small government, we view the question of regulatory accumulation as a problem of institutional design. There is a well-understood political and technical process for the creation of a regulation that involves both the executive and legislative branches of government. Presented in the simplest terms, the process starts with the approval of legislation by the House and Senate, which is then signed into law by the President. Next, the appropriate agency goes through a specified rulemaking procedure, which includes soliciting and answering public comments. For significant rules (those expected to have an annual impact on the economy of $100 million or more), agencies must also get approval from the Office of Management and Budget.

Although the process for new rulemaking is well specified under current law, our regulatory system offers no well-defined process for undoing or improving a specific regulation after it has been adopted. The only real option is to jump through the full set of political and procedural hoops described above that created the original regulation.

Our proposal for a Regulatory Improvement Commission (RIC, or the Commission) takes a more streamlined approach. Modeled after the Base Realignment and Closure (BRAC) Commission, the RIC would be approved by Congress for a limited period of time. The Commission would be staffed primarily with personnel “borrowed” from federal agencies, and RIC members would be appointed by the President and the congressional leaders of both parties. Further, the Commission would have clear objectives, be completely transparent, and follow a strict timeline.

The Commission would focus on a limited list of regulations – say, 15 or 20 – to be considered for repeal or improvement. It would base its proposals on suggestions submitted through public comment, coupled with public testimony and a quantitative and qualitative assessment of the rules under consideration. The RIC’s list of proposals would then go to Congress for an up-or-down vote with no amendments, and finally to the President for approval.

By including both the legislative and executive branches in reviewing regulations, the RIC can adopt a streamlined process for the consideration of regulatory changes. In addition, the Commission would not break or change the current process for creating regulations, nor would it raise any constitutional questions. All it would require is enabling legislation and some attention to internal congressional rules.

Our proposal acknowledges the importance of politics in the regulatory process. Ultimately the basis for regulation rests on enacted legislation, which is the result of a long and complicated political process. Cost-benefit analysis alone, no matter how persuasive, cannot overcome legislative action.

Perhaps most important in the current political climate, the proposed Regulatory Improvement Commission should be acceptable to both Republicans and Democrats because it gives Congress “two bites” at the apple. The first bite is when the original enabling legislation for the Commission is passed. Initially, Congress may opt to keep certain regulations that are particularly controversial off the table, such as environmental regulations.

The second bite comes when the proposed package of regulatory changes goes to Congress for approval. If the package does not appropriately balance the interests of both Democrats and Republicans, Congress can vote the package down.

Importantly, the RIC would help build trust in the retrospective regulatory review process. Like the BRAC Commission, the proposed Regulatory Improvement Commission is a one-shot deal that must be re-authorized by Congress. If the initial Commission is successful, Congress may be more willing to authorize it again.

The Regulatory Improvement Commission can be compared to something that sounds superficially similar: the SCRUB Act, which stands for the Searching for and Cutting Regulations that are Unnecessarily Burdensome Act and was recently discussed by a House subcommittee. The SCRUB Act would set up an independent commission to review regulations and forward proposed changes or repeals to Congress. However, under the SCRUB Act, the regulatory changes would go into effect unless Congress passed a resolution rejecting them.

We view the SCRUB Act commission as both politically and technically infeasible compared to the Regulatory Improvement Commission. Politically, it would be impossible for Democrats to approve any commission that possesses effectively unlimited powers to undo regulations. Additionally, the SCRUB Act raises certain constitutional issues, such as the delegation of legislative authority to a commission, that are difficult to surmount. For these reasons, we view the Regulatory Improvement Commission as far more likely to be effective than the independent commission proposed in the SCRUB Act.

Institutional innovation requires both a willingness to believe that things can be different and pragmatism about what is possible. It is clear that modern economies require some way of pruning down regulatory accumulation. The Regulatory Improvement Commission would be a first step in that direction.

This post was originally published on the University of Pennsylvania’s RegBlog; you can read it on their website here. It is part of RegBlog’s five-part series, Debating the Independent Retrospective Review of Regulations.

Letting Innovation Out of the Box

Innovating in the digital age requires flexible rules that keep pace with the latest technology. This is especially true in the video services market, where change has been fast and furious. That’s why Congress should act to repeal an expensive and innovation-restricting requirement on the design of set-top cable boxes.

Currently the FCC mandates that each cable box — the electronic device in your home that links your TV with your cable provider — use a particular type of technology known as a “CableCARD” that contains the security mechanisms needed to receive programming. The FCC’s rule, formally known as the “integration ban,” requires that these security functions not be hard-wired or otherwise integrated into the rest of the box.

The CableCARD requirement is a classic example of prescriptively regulating in order to reach a certain outcome. In this case, the desired outcome was competition: a retail market for set-top boxes. In its order the FCC stated the integration ban would “result in a broad expansion of the market for navigation devices so that they become commercially available through retail outlets.” The idea was for customers to buy cable boxes instead of leasing them: a universal box with a removable CableCARD that could be used across providers, in a marketplace similar to the one for telephones.

But the intended outcome, a retail market for set-top boxes, never developed. Consumers just didn’t want to mess around with another piece of electronics that could potentially become outdated or incompatible. Instead, the overwhelming majority of consumers lease their set-top boxes through their cable provider. Moreover, an increasing number of consumers access digital programming from smart devices outside of traditional TV, such as tablets and smartphones. And, perhaps most telling, the introduction of YouTube, Hulu, Netflix, Amazon Prime and other delivery channels created different ways to access digital programming without a cable box. Recent research shows more people are cutting the cord on their cable subscription, particularly young people who are a key customer-building demographic.

The integration ban may have been well-intentioned, but the rule now does little more than impose old technology on cable providers and consumers. Innovators are in the process of developing a “boxless” method of delivering cable service, where all control and delivery processing occurs in the cloud. But that requires flexibility in the evolution of box design, which the current rigid integration ban forecloses. Cable customers ultimately pay the price of the ban by missing out on the potential pace of innovation.

Fortunately there is an opportunity to repeal this outdated rule, with fresh momentum in Congress. The House will likely pass the repeal with bipartisan agreement, and it is now up for discussion in the Senate. Importantly, the current proposal preserves the CableCARD standard for use in retail devices like the TiVo, only affecting how these security features are embedded in boxes leased from the cable company. Customers can therefore continue to purchase their boxes, and retail sales could still become a bigger share of the set-top box market if that is the direction in which it evolves.

The world of digital programming no longer revolves around cable companies. It is time policymakers recognize the new face of competition in the video industry and let the tremendous pace of investment and innovation speak for itself. Repealing the set-top box integration ban would be a small but positive step forward in modernizing regulation for the data-driven economy.

This op-ed was originally published by Roll Call; you can read it on their website here.

PPI President Joins Bipartisan Group of U.S. Representatives to Unveil Regulatory Improvement Commission Proposal

WASHINGTON—Progressive Policy Institute (PPI) President Will Marshall today joined Representatives Patrick Murphy (D-Fla.), Mick Mulvaney (R-S.C.) and a bipartisan group of House members to unveil major regulatory reform legislation based on a proposal by PPI to tackle regulatory accumulation, the harmful layering of new federal rules atop old rules year after year.

The Regulatory Improvement Act of 2014 (H.R. 4646) would establish an independent advisory body authorized by Congress—the Regulatory Improvement Commission (RIC)—to review, remove or improve existing outdated, duplicative or inefficient regulations as submitted by the public. The legislation is identical to a Senate companion bill (S. 1390) introduced by Senators Angus King (I-Maine) and Roy Blunt (R-Mo.).

“Regulatory overload is suffocating economic growth and stifling innovation in the United States,” said Michael Mandel, PPI Chief Economic Strategist. “Regulations are essential for a well-functioning economy, but the federal government needs a systematic mechanism for improving or removing regulations that have outlived their usefulness. The RIC would effectively ‘scrape the barnacles off the bottom of the boat’ and allow our nation’s businesses to move forward on innovating and hiring workers.”

Originally conceived by PPI economists Michael Mandel and Diana Carew, the RIC is modeled after the highly successful military base-closing commission. It would consist of nine members appointed by Congressional leadership and the President to consider a single sector or area of regulations and report regulations in need of improvement, consolidation, or repeal.

Both Houses of Congress would then consider the Commission’s report under expedited legislative procedures, which allow relevant Congressional Committees to review the Commission’s report but not amend the recommendations. The bill would then be placed on the calendar of each chamber for a straight up-or-down vote.

To avoid the creation of a new government bureaucracy, the RIC would be dissolved after submitting its report and would have to be re-authorized each time Congress wished to repeat the process.

###

A Fresh Approach to International Investment Rules

Money makes the world go round. Although money flows are global, the rules governing investment are bilateral and regional. Cross-border investment is governed by a patchwork of over 3,000 bilateral investment treaties (BITs), regional and bilateral trade agreements (FTAs) with investment chapters, as well as the trade-related investment provisions of the World Trade Organization. While many states have signed international investment agreements (IIAs), they do not cover all states, investors, or categories of investments. Taken in sum, these IIAs have many problems, including:

  • The 3,000-plus IIAs vary significantly and do not offer clear and uniform guidelines to protect international investment.
  • Tribunals have no effective means of enforcing their decisions.
  • Some investors and states take advantage of the hodgepodge of rules to “game the system” through forum-shopping and other strategies.
  • Investors are increasingly challenging government regulatory or budgetary policies that reduce the value of their investments as “indirect expropriations.”
  • Citizens in the United States, EU, and other countries are increasingly critical of the balkanized, uneven investor-state arbitration process.

We believe it is time for a fresh approach to international investment agreements: one that builds a more universal, consistent, and accountable system. In this policy brief, we put forward three concrete steps that can promote and protect foreign investment, advance the rule of law, preserve the ability of governments to regulate, and link trade and investment.

Step 1: At the behest of the G-20, the WTO and international organizations with investment competence should establish a committee of experts to develop a code of norms and best practices. G-20 members should use this code as a template for future investment agreements and encourage all WTO member states to sign up.

Step 2: WTO members should set up an Investment Appellate Body to review and, if necessary, override controversial arbitrations where the rights of investors or governments were inadequately protected. The Investment Appellate Body would stand alongside the WTO’s Trade Appellate Body.

Step 3: To give the Investment Appellate Body teeth, one or more WTO member states should ask the WTO Secretariat to explore the feasibility of using trade policy to retaliate against states that fail to comply with its decisions.

Download the complete report.

How to Lobotomize the Internet

At first glance, the recent decision by Europe’s top court to enforce the “right to forget” for personal information seems unconnected to economic growth.  After all, if a young adult asks a search engine to delete links to indiscreet teenage pictures, what harm could that do to GDP or living standards?

But here’s the problem: Once search engines such as Google are required to set up a large-scale mechanism by which links to personal information can be deleted, history suggests that it will be all too easy to use the same mechanism for deleting links to other information as well.  Unpleasant historical information—gone.  Information that offends some powerful politician—gone.  Technical knowledge that challenges a powerful incumbent company—gone.

And with no links, it’s as if the information isn’t there.

The Internet is the greatest engine for the replication and spreading of knowledge that the world has ever seen.  As such, it is also the greatest engine for global growth. A technological or institutional advance made in one country can spread nearly instantaneously around the world.

Forcing search engines to delete links wholesale is like lobotomizing the Internet.  Go far enough down that path, and the spread of knowledge will stutter and global growth will slow. Are the gains from the “right to forget” worth the pain?

What’s Missing from the Net Neutrality Debate? A Principle for Designing Good Remedies

The reaction from netizens was swift and fierce: Chairman Wheeler’s proposal to permit paid prioritization on the Internet—with an offer to stamp out discriminatory conduct on a case-by-case basis—was considered a betrayal of President Obama’s net-neutrality pledge. Protesters gathered in front of the Federal Communications Commission (FCC), and petitions made the rounds on Twitter.

The grassroots campaign for reclassifying Internet services from “information” to “telecommunications” served its purpose: The Chairman has essentially changed his position and has put the agency on the path to embracing a more invasive “Title II” approach. In economic terms, this means that the regulator could establish a “zero price” for paid prioritization. And when its price goes to zero, priority delivery will cease to exist.

This is not the agency’s first attempt to regulate paid prioritization out of existence. In a recent decision to toss the FCC’s original Open Internet rules, the D.C. Circuit said that interfering with market negotiations by setting a zero price amounted to “common carriage” regulation, which was legal only if the FCC reclassified Internet service (and then established a regulated rate of zero).

European Regulators Take Aim at U.S. Tech Companies

Even as U.S. and European officials negotiate a major free-trade agreement, a new form of protectionism has surfaced across the Atlantic. Some in Europe are pushing for policies that would limit the free flow of data over the Internet. The calls for regulating the Web are also a symptom of a broader problem—a languishing economic partnership between the U.S. and Europe.

Most pernicious is the proposal to create a “European cloud,” a communications network that would prohibit data from traveling to servers outside the continent. Germany’s largest telephone company, Deutsche Telekom AG, is an outspoken advocate of the idea, ostensibly hoping to quell privacy concerns. This would mark the end of an open, global Internet, which has been an incredible engine of economic growth. In an April report, the U.S. Trade Representative called the concept “draconian” and a way to give a “protectionist advantage” to companies based in the European Union.

Why the tension? America’s relationship with Europe, for one, has taken a back seat as President Obama pivots to Asia and Congress’s old internationalist consensus erodes. Europeans feel neglected by Washington, and they made those sentiments known when in April the organization I head, the Progressive Policy Institute, led a bipartisan group of congressional staff and policy analysts to Paris, Brussels and Berlin for talks with government, business and opinion leaders. The key take-away: The White House and Congress should make reinvigorating the trans-Atlantic partnership a top priority.

The specter of slow economic growth haunts both the U.S. and EU. Since 2000 U.S. gross domestic product has grown less than 2% a year on average. As Europe emerges from the euro-zone debt crisis, projected growth this year across the 28-member EU is 1.5%. This has produced middle-class stagnation in both places: In America, the per capita median income is no higher than in 2000. In Germany, the EU’s economic powerhouse, median income has inched upward by 1.4% in the same period.

In short, the West needs to tap new sources of economic growth. Liberalizing trade—the purpose of the continuing Transatlantic Trade and Investment Partnership talks—will help. But there’s another way to unleash much-needed growth, and that’s by capitalizing on our shared comparative advantage in digital innovation.

There is currently a “data gap” between the U.S. and Europe, as a recent study by my organization and the Brussels-based Lisbon Council found. Americans use more than three times as much data per capita as Germans and six times as much as Italians. If you remove consumer video usage and limit the sample to business data, the U.S. advantage shrinks but remains substantial. Europeans have been slower to embrace data as a new factor of production and driver of global growth.

Instead of trying to catch up, however, some European regulators have elected to punish big U.S. tech companies for alleged privacy violations. Spain slapped Google with a roughly $1.2 million fine in December, and France demanded about $200,000 from the company in January. The European Parliament has passed new data-protection regulations that would impose stringent new rules on companies doing business with Europeans anywhere.

Some U.S. companies have indeed failed to be responsible stewards of customer data. But the privacy rift is more a matter of cultural differences: Europeans regard privacy as a “fundamental human right,” while Americans are more inclined to let companies collect personal data in exchange for access to the Internet’s boundless information. U.S. tech companies are especially worried that an overly rigid approach to privacy threatens the use of “big data” analytics to improve services and raise productivity.

Former National Security Agency contractor Edward Snowden’s intelligence revelations have made matters worse. After it was leaked that German Chancellor Angela Merkel’s cellphone may have been monitored by the NSA, she endorsed the idea of building a European communications network to keep data from passing through the U.S. A European cloud would no doubt be great for European cloud providers, but it would raise the cost of accessing information for consumers and businesses. The same is true for “data localization,” a popular concept that would require companies to build servers in the countries they do business in.

In March the European Parliament passed a nonbinding resolution to suspend agreements with the U.S. on bank data-sharing and Internet privacy. EU Justice Commissioner Viviane Reding is now calling for revisions in the “Safe Harbor” principles, an agreement between the U.S. and EU that gives roughly 3,000 U.S. companies the right to repatriate the personal data of Europeans, as long as the company allows users to opt out and limits collection to “relevant” information.

Upending this agreement would devastate U.S. Internet companies, whose existence relies on data collection. But ultimately the steps toward Balkanizing the Internet would damage European companies as well. The Internet enables small- and medium-size firms to compete in global markets previously unavailable to them. Europe will also never cultivate a startup culture with a restricted Internet.

The U.S. and Europe’s differences are considerable but bridgeable. Nobody wants a trade war as a newly aggressive Russia lurks in the background. But President Obama must offer more forceful leadership to finish TTIP and to offer reassurances on surveillance, data protection and privacy. That would go a long way in fostering the trans-Atlantic cooperation that has atrophied since the Cold War ended.

This article was originally published in The Wall Street Journal; find it on their website here.

Roll Call: Manufacturing’s Comeback: Numbers Fabricate a Complicated-Yet-Rosy Outlook

Michael Mandel, PPI’s chief economist, was quoted in David Harrison’s article for Roll Call on unreliable manufacturing data.  The article explored the credibility of recent Commerce Department statistics showing that the manufacturing sector has returned to its pre-crash value.  Mandel explained how the government-provided data painted a complex, evolving industry with a broad brush, making it difficult to assess the actual state of affairs:

“You think your manufacturing is growing, but it may be shrinking, and you don’t know the right places to apply policy levers,” said Michael Mandel, an economist at the Progressive Policy Institute who has spent years studying these issues. “At this point, in manufacturing we’re flying blind.”

Read the full article on Roll Call’s website, here.

Why Curing Cancer Will Boost Productivity

An article in Forbes trumpets a new approach to actually curing cancer using the immune system.  The article itself, entitled “Is This How We’ll Cure Cancer?” is worth reading, though a healthy dose of skepticism is important.

However, from my perspective, what’s important are the quotes about cost toward the bottom of the story:

“You’re going to see these companies that are going to get crushed by this new environment, despite the fact that health care spending is going to almost double,” Jimenez says. He has a team actively exploring new ways of pricing cancer drugs, in which several medicines are sold for the price of one, or health systems or insurers pay based on how many patients are cured.

The real question is not cost, but productivity. And I don’t mean the productivity of cancer patients, but the productivity of the whole economy.

Right now we have a healthcare system where millions of person-hours, many of them highly skilled,  are thrown into treating and providing palliative care for cancer patients. These skilled person-hours are not available for production or research in other parts of the economy.

A drug that cures a major form of cancer will almost certainly boost what we call “gross medical productivity,” defined as the size of the population divided by the number of healthcare workers.  And rising gross medical productivity will free up skilled labor for other forms of non-healthcare production, which will in turn boost non-health GDP.
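As a quick sketch, the “gross medical productivity” arithmetic can be written out directly. The population and workforce figures below are hypothetical round numbers chosen purely for illustration, not actual statistics.

```python
# "Gross medical productivity" as defined above: the size of the
# population divided by the number of healthcare workers.
# NOTE: all figures below are hypothetical round numbers.

def gross_medical_productivity(population: int, healthcare_workers: int) -> float:
    """People served per healthcare worker."""
    return population / healthcare_workers

population = 320_000_000      # assumed national population
workers_before = 16_000_000   # assumed healthcare workforce today
workers_after = 15_000_000    # assumed workforce after a cure frees up labor

before = gross_medical_productivity(population, workers_before)
after = gross_medical_productivity(population, workers_after)

print(f"before: {before:.1f} people per healthcare worker")  # 20.0
print(f"after:  {after:.1f} people per healthcare worker")   # 21.3
```

On these assumed numbers, a cure that lets one million skilled workers shift into other production raises the ratio from 20.0 to about 21.3 people per healthcare worker, and it is that freed-up labor, rather than the drug’s sticker price, that drives the productivity argument.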

So even if a cure for a particular type of cancer seems expensive measured in dollars, it may be eminently reasonable when measured in terms of its impact on the nation’s productivity and total output.


The Case for Pro-growth Progressivism

The many unanticipated events seen over the past 20 years should leave us with some humility about our ability to identify coming political and economic challenges. Will the next two decades be marked by surplus labour due to job-eroding technological change, or by labour shortages in countries with stagnant and ageing populations? Will wealth be concentrated in fewer and fewer hands, or will economic success become dispersed as smart people around the world take advantage of the Internet to create new businesses? Will the global economy stay volatile or return to the moderation that predated the financial crisis?

Here is what we do know: the developed, richer countries are undergoing a collective crisis of faith, rooted in slow growth. The major advanced countries have seen their real per-capita GDP rise at a depressingly low 0.8% rate over the past ten years. By comparison, the annual rate of growth was 1.8% in the ten years ending 2007 and 2.6% in the 1980s. A slow, halting recovery has left many citizens pessimistic about their country’s economic prospects and their government’s ability to be effective. And that pessimism, in turn, has helped nurture an ugly bestiary of political and policy dysfunction, from Washington to Brussels to Tokyo. The result is a loss of political flexibility and adaptability, a painful ossification that emphasises protecting the status quo rather than embracing innovation. That ossification leaves us more vulnerable to the next crisis or challenge, whether political, technological, or biological.

1. Pro-growth progressivism
Economics and politics are intertwined in a mutually reinforcing crisis of confidence. Tackling this deficit of trust is job one for the world’s political leaders. Rather than just hope the global economy picks up, progressives should coalesce behind an ambitious plan to accelerate growth, boost innovation and revive upward mobility. The goal is to produce a flexible, dynamic economy that can deal with whatever challenges arise. We deserve better than a choice between the right’s anti-government populism and the left’s anti-business populism. That’s why we need ‘pro-growth progressivism.’ Pro-growth progressivism will take on varied forms in different countries, but it typically has these features.

2. Focus on growth, rather than redistribution
The slow-growth figure cited above was for GDP, which includes not just wages but returns to capital as well. That means even before we get to the question of distribution, the whole economic pie is growing at an extremely slow rate. Absent more robust growth, the politics of redistribution becomes an empty exercise in moral posturing. Moreover, a narrow focus on ‘fairness’ may misdirect resources that otherwise could be used to enlarge the nation’s productive base. It also fosters an ‘us versus them’ mentality that, by reinforcing polarisation, can only make it harder to build consensus around economic initiatives that benefit everyone.

Without growth, the developed countries can’t generate sufficient national income to simultaneously finance public investment in world-class infrastructure, science, and skills and meet the soaring health and retirement costs of an ageing society. What’s more, we don’t have the resources to deal with unexpected crises or challenges. That’s why restoring economic dynamism must be progressives’ top priority. Putting the cart of redistribution before the horse of economic growth turns politics into a zero-sum fight over a shrinking pie. This approach might win an election here or there, but it is not a durable foundation on which to build and sustain progressive majorities.

3. Put a priority on investment, rather than consumption
An essential ingredient for encouraging growth is investment in physical, human, and knowledge capital. Investment is spending on the future. By contrast, consumption, almost by definition, is about the resources devoted to today’s needs and pleasures. But these are just the economic definitions – investment and consumption also represent different attitudes towards the future and towards the next generation. Right now the developed countries are ageing in a way that has more and more older citizens being supported by a slow-growing or even shrinking working-age population. To make that work, the younger generation needs updated infrastructure, the newest equipment, the best education and training, and the most innovative technologies – and all these require resources.

4. Encourage innovation and entrepreneurship, rather than business as usual
In addition to investment, true growth needs innovative ideas and technology and the entrepreneurs to put them into practice. And it needs government to encourage innovation, or at least not get in the way with excessive regulations. One good example: The Internet of Things – the idea that in the future, all objects will be interconnected and have the ability to automatically transfer data without requiring human-to-human or human-to-computer interaction. The Internet transformed digital industries, which represent about 20% of the economy. But the Internet of Things has the potential to transform physical industries, such as manufacturing, transportation, public service and healthcare, which represent the other 80% of the economy. The Internet of Things could make an enormous difference in growth and job creation – but only if excess government regulations don’t make it too expensive or time-consuming to put into place.

5. Mobility flows from innovation and growth
One of the great advantages of growth and innovation is that they create more opportunities for true mobility. That’s certainly true in the United States, where growth in the tech/info sector has drawn more minorities into well-paying jobs. From 2006 to 2013, the number of Hispanics in computer and mathematical occupations rose by 58% and the number of blacks rose by 41%.

6. Foster innovation in government
Government has an essential role to play in pro-growth progressivism, providing the necessary safety net and the regulatory structure that makes the economy work. At the same time, progressives must focus on making government work better, so that it’s felt as a positive force in people’s lives. One aspect of innovation in government is an emphasis on flexibility, rather than strict rules. In the United States, PPI has proposed a Regulatory Improvement Commission, which would focus on making regulations better and more flexible, rather than doing away with them altogether.

 

This article was originally published by Policy Network, please find the original article here.

Forbes: How to Ease the Crushing Costs of Federal Regulation

Michael Mandel was a guest speaker on Bill Frezza’s RealClear Radio Hour to discuss his proposal for a Regulatory Improvement Commission (RIC). Frezza’s interview and article dealt with the continuing drag of regulatory costs on the U.S. economy and PPI’s proposal for a politically viable reform option. Mandel’s radio interview was later quoted in an article Frezza wrote for Forbes.

“The cost-benefit approach to fixing the regulatory problem is not going to work,” says Michael Mandel, chief economic strategist at the Progressive Policy Institute, my second guest on this week’s RealClear Radio Hour. He is especially concerned about the way regulatory accumulation impedes innovation. “History tells us that innovation will allow us to deal with a lot of our concerns about the future of the earth in a different way. They are real concerns, but the path forward is more innovation rather than less.”

Mandel goes on to explain the accumulation of regulations over time, the difficulty of keeping up with technological advancements, and how RIC would address these challenges.

Listen to the full interview on RealClear’s website here and read the full article on Forbes’ website here.

PPI Abroad: Digital Trade Study Group Recap

Last week (April 22-25), PPI returned to Europe for an intensive round of high-level meetings and one big public event in three capitals, Paris, Brussels and Berlin. It was the third PPI project in Europe in the last 18 months, a sign of our commitment to increasing awareness about the rise of the data-driven economy and its implications for policymakers.

PPI has long been a catalyst for transatlantic dialogue since we helped Bill Clinton and Tony Blair launch the “Third Way” conversations among progressive leaders. Most recently, our work in Europe has centered on measuring the volume and economic value of cross-border data flows.

Our focus last week was mainly on digital trade, and the need to fend off some truly bad proposals in Europe that would at a minimum erect barriers to cross-border data flows, and at a maximum balkanize the Internet by creating an exclusively European cloud. At a time when both America and Europe are plagued by slow economic growth, any actions that would choke off digital innovation and trade make little sense.

To underscore the point, we led a bipartisan “Digital Trade Study Group” of 10 senior Senate and House staffers with expertise in digital policy issues to Europe. They learned much about European attitudes toward data protection and privacy – including the emotional response to the NSA revelations, especially in Germany. The presence of a bipartisan group of knowledgeable Hill staff also impressed upon the European officials we met that Congress has a growing interest in resolving disputes over trade in general and digital trade in particular. From the feedback we’ve received, the trip was a big success.

Here are some highlights:

  • In Paris, the Study Group met with economic researchers at the Organisation for Economic Co-operation and Development (OECD). This meeting made clear that no one has developed a way to accurately capture and measure the contribution of data-driven innovation and trade to economic growth. As Michael Mandel, PPI’s chief economic strategist, has noted in a series of groundbreaking reports on the measurement challenge, this makes it difficult for policymakers to weigh the likely effects of new regulations. In a second session, the group discussed the OECD’s work on tax base erosion. The G-20 has tasked the OECD with exploring ways to prevent international companies from sheltering profits and income from national tax collectors.
  • In Brussels, our traveling party met with several high-ranking EU officials who discussed the progress of the transatlantic trade agreement (TTIP), data protection and how Europeans view the crisis in Ukraine. Additionally, our group was briefed by U.S. officials on digital trade issues and received a preview of the upcoming European election.
  • Also in Brussels, PPI teamed up with the Lisbon Council for a major public event on these themes featuring EU and U.S. trade officials, as well as economists and representatives from European businesses. At that event, we released a joint report co-authored by PPI’s Michael Mandel and Lisbon Council’s Paul Hofheinz on “Bridging the Data Gap: How Digital Innovation Can Drive Growth and Jobs.” It highlighted a large and growing “data gap” between the U.S. and the EU that ought to give Europeans pause.
  • We concluded our trip in Berlin, where long-time PPI friend John Emerson, U.S. Ambassador to Germany, kindly hosted us for an insightful breakfast briefing on U.S.-German relations. Next the group met with a representative of Bitkom, the major association of the digital industry in Germany. Our last event was a dialogue with German political, business and intellectual leaders organized by Das Progressive Zentrum, a Berlin think tank. Focusing on the need to rebuild trust between America and Germany in the wake of the Snowden revelations, it was a blunt, intense and illuminating conversation.

This trip was extremely productive and highlighted that PPI is building an extensive network of European contacts and partners who share our commitment to finding common ground on questions of trade, digital innovation and stronger economic growth.

Missing the Mark on Federal Student Aid

Two of today’s biggest proposals aimed at solving the student debt crisis – refinancing and expanding income-driven repayment – are well-intentioned but miss the mark. Better reforms would address the real problem of an outdated postsecondary education system, and would use the billions already spent annually on federal student aid more effectively.

The first proposal, refinancing student loans, is not new but is gaining fresh momentum. Last week, Sen. Elizabeth Warren (D-Mass.) announced plans to introduce legislation in the coming weeks that would refinance all outstanding loans at today’s subsidized Stafford rate of 3.86 percent. The costs of refinancing would be paid for by the controversial “Buffett rule” tax increase on millionaires.

Taxing the wealthy to reduce interest rates may help relieve high interest rate payments, but does nothing to address the underlying debt. The Department of Education informally estimates a refinancing scheme could cost upwards of $100 billion over 10 years, a high price to pay for a plan that does not hold schools accountable. And if the ability to refinance extends into the future, it would be cheaper to just offer all federal student loans at the subsidized Stafford rate.

Moreover, refinancing all student loans will mostly benefit graduate students and those with very high debt levels. This may not be the intended target for relief, as it allocates more funding to borrowers who made poor borrowing choices as opposed to those under the most financial stress. A more effective way to deal with graduate loans would be to cap loan limits and use those savings to launch a financial awareness campaign, complete with labor-market expectations data, before people borrow.

The second proposal, being explored by Sen. Tom Harkin (D-Iowa), calls for making income-driven repayment (IDR – also known as “Pay As You Earn”) the automatic repayment plan for all borrowers. Under IDR, borrowers pay up to 10 percent of discretionary income regardless of loan amount, for a maximum repayment term of 20 years after which all outstanding debt is forgiven.
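To make the repayment formula concrete, here is a minimal Python sketch of an IDR-style payment calculation. It assumes the standard definition used under Pay As You Earn – discretionary income is adjusted gross income above 150 percent of the federal poverty line – and the dollar figures in the example are illustrative, not official.

```python
def annual_idr_payment(agi, poverty_line, rate=0.10):
    """Annual income-driven repayment: rate times discretionary income,
    where discretionary income is AGI minus 150% of the poverty line
    (the standard Pay As You Earn definition)."""
    discretionary = max(0.0, agi - 1.5 * poverty_line)
    return rate * discretionary

# Illustrative: a $40,000 AGI against a hypothetical $11,670 poverty line
# gives discretionary income of $22,495 and a payment of about $2,250/year,
# regardless of how much the borrower actually owes.
payment = annual_idr_payment(40_000, 11_670)
```

The key design feature the sketch highlights is that the payment depends only on income, not on the loan balance; any balance remaining after the 20-year term is forgiven, which is why cost estimates hinge so heavily on borrower incomes and debt levels rather than on the rate charged.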

The costs of IDR are only beginning to come to light. New CBO estimates for expanding IDR as proposed in the president’s budget show an estimated cost of $8.2 billion over the next 10 years, although independent estimates range as high as $14 billion annually. Because this form of IDR is so new, the true cost will likely depend on programmatic details and implementation.

Similar to refinancing, IDR stands to benefit graduate students and high-debt borrowers the most. In fact, there have already been documented cases of IDR abuse. Without accountability, expanding what was designed to be a temporary solution for the neediest borrowers simply hastens the transfer of the crisis of college affordability from the borrower to the taxpayer. It does nothing to address rising tuition. This suggests that a one-size-fits-all approach to loan repayment – or at least IDR in its current form – may not be the most cost-effective.

A better approach to reforming federal student aid would help students young and old by modernizing the postsecondary education system. Right now, there are too few acceptable pathways into the workforce after high school. The mentality that everyone needs a four-year degree is not only exacerbating the strain on the federal aid system, but it is enabling poor performing postsecondary institutions to stay in business.

One possible way to target industry reform is by experimenting with Pell Grants. The scope of Pell Grants could be expanded to include more non-traditional training programs. This would encourage alternative pathways into the workforce, for example, apprenticeships or online-based credentials.

Another experiment could be to change the eligibility criteria for schools receiving Pell Grants. Grant money could initially be block-allocated to schools based on completion and post-graduate success metrics, and later allocated according to the success of individual recipients. This forces us to collect data on grant recipients over time while encouraging schools to better serve students. Such experiments can be done cost-effectively by tightening eligibility criteria, reducing the eligibility period, and tying individual funding to progress benchmarks.

The effectiveness of Pell Grants is currently unknown, leaving room for improvement. We do know that emphasis on enrollment over completion and post-graduate success does little to help the recipient or taxpayer in the long run. Yet the program has a significant footprint: recent research shows about 9.4 million students received Pell Grants in 2012, up from 5.2 million in 2006. According to the CBO, the cost of Pell Grants skyrocketed by 168 percent over the same period.
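A quick check of the growth rates cited above shows why the cost figure is striking: spending grew roughly twice as fast as enrollment, meaning cost per recipient rose sharply. This sketch uses only the numbers quoted in the text.

```python
# Figures from the text: Pell Grant recipients, 2006 vs. 2012.
recipients_2006 = 5.2e6
recipients_2012 = 9.4e6

# Enrollment grew roughly 81%...
enrollment_growth = (recipients_2012 - recipients_2006) / recipients_2006

# ...while the CBO puts total cost growth at 168% over the same period,
# implying spending per recipient rose, not just the number of students.
cost_growth = 1.68

print(f"enrollment growth: {enrollment_growth:.0%}, cost growth: {cost_growth:.0%}")
```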

The interconnectedness of federal student aid programs with the postsecondary education system should be seized by policymakers as an opportunity. Focusing on effectively using existing funds to help borrowers, and modernizing the postsecondary education system, is the best way to address the student debt crisis now.

This Op-Ed was originally published in The Hill.

Has The FCC Made Its Case To Restrict Certain Bidders In The Broadcast-Spectrum Incentive Auction?

FCC Chairman Tom Wheeler recently signaled that his agency is considering certain bidding restrictions for the upcoming broadcast-spectrum auction that are specifically targeted at the two largest nationwide providers. At some ill-defined point in the auction, the restrictions reportedly would be imposed on any bidder that has more than one third of the available “low-band” spectrum in a market.

And guess who holds more than one-third of “low-band” spectrum in any particular market? AT&T and Verizon. As a result of the proposed restrictions, between 40 and 50 percent of the spectrum blocks in a given band plan would be off limits for the two mobile broadband companies best positioned to battle cable modem providers. Is this a good thing?

The best policy justification for bidding restrictions in an auction is the presence of monopoly power. The theory is that a monopolist is willing to pay more to cement its position than a rival is willing to pay to displace the monopolist. Although the auction might cause the monopolist to surrender a good portion of its profits to the auctioneer, at the end of the day, consumers are still beholden to monopoly prices. (The second best justification for a restriction is that there is something special about “low-band” spectrum—without it, smaller carriers cannot compete effectively. I have rebutted this justification here.)

A review of the evidence suggests that no wireless carrier is exercising monopoly power—that is, setting prices above competitive levels or restricting output.

Recent price cuts in response to T-Mobile’s “Uncarrier” initiatives and no-contract plans have put downward pressure on wireless margins. In February, AT&T cut its Mobile Share shared data plan prices (with 10 GB of data) to $160 per month for four phone lines; in response, Verizon matched that pricing in April. In March, AT&T also cut the price of its smaller shared data plan (with 2 GB of data) by $15, to $65 per month for one phone line. These pricing episodes are hardly consistent with the notion of monopoly power.

Perhaps these recent price cuts mask a longer trend of rising prices? Not so. According to the FCC’s 2013 Wireless Competition report, competition is robust:

  • Monthly average revenue per unit (“ARPU”) for wireless service declined from $48.04 in 2006 to $46.63 in 2012; wireless voice revenue per minute has declined from $0.06 to $0.05 over the same period.
  • Voice revenue per minute in the United States ($0.033) is less than one third of the European average.
  • U.S. mobile subscribers talked an average of 945 minutes per month on their mobile phones in 2011, compared with 134 minutes in Japan and 170 minutes in Western Europe.
  • And the United States has the second least concentrated market structure in a Bank of America survey of ten countries, behind only the United Kingdom.

After 408 pages of excruciating detail on the state of wireless competition, the FCC is hard-pressed to identify any data consistent with monopoly power. And without a showing of monopoly power, the social benefits of these bidder restrictions are likely insignificant.

On the other hand, unwarranted restrictions can inflict significant losses on society in three important ways.

First, assuming AT&T even shows up to the auction, prices on the restricted blocks will be significantly less than the prices in the non-restricted blocks. Although there is some chance that prices in the non-restricted blocks could be higher (due to the artificial scarcity created by the restrictions), the FCC is exposing itself and the taxpayer to a considerable risk of diminished auction revenues—revenues needed to fund deficit reduction, build-out of an interoperable public-safety network, and other priorities enumerated in The Middle Class Tax Relief and Job Creation Act of 2012. And auction revenues are needed to compensate broadcasters interested in giving up their spectrum. The amount of the broadcast spectrum that will be available for reallocation to wireless broadband will depend critically on the broadcasters’ perception of auction prices; the law of supply dictates that there will be less spectrum available for sale the lower the expected price.

Second, by setting aside valuable spectrum, the FCC is creating an attractive opportunity for firms to engage in regulatory arbitrage. Set asides will encourage firms not interested in building networks but instead buying spectrum to flip it later for a windfall. History has shown that set-aside spectrum sits fallow for years, staving off the sort of broadband deployment that Congress desires. Competition for the arbitrage opportunity leads to wasteful “rent seeking” activity, which represents another loss. Allowing the carrier that will ultimately deploy the spectrum to purchase it immediately and directly (rather than through a wasteful and superfluous middleman) is clearly the more efficient choice.

Third, efficiency dictates that spectrum goes to the wireless carriers that value it the most. If an incumbent carrier facing a spectrum crunch is willing to pay more for the next chunk of available spectrum than an entrant, assigning the spectrum to the entrant represents a misallocation of society’s resources. Relatedly, a significant challenge facing the FCC is injecting competition into the broadband marketplace. According to the FCC’s most recent data, a full 19 percent of U.S. homes were beholden to a single provider of broadband service (including wireless operators) capable of delivering download speeds of 10 Mbps. Wireless broadband could impose significant discipline on cable operators in these pockets if the FCC opens up the spectrum spigot to all firms, as opposed to parsing out thin slices to smaller companies.

In sum, the FCC has failed to meet its evidentiary burden for the use of bidding restrictions as currently proposed for the upcoming incentive auctions. There is no compelling evidence of monopoly power in the wireless sector. And there has been no attempt to prove that smaller carriers need access to “low-band” spectrum to compete effectively against their larger competitors. Until those burdens are met, the FCC should let the auction blocks fall where they may.

This article was originally published in Forbes, please find the original article on their website here.

Bloomberg: EU Risks Hurting Growth in Data Safeguard Effort, Study Finds

Rebecca Christie of Bloomberg wrote an article covering PPI’s transatlantic conference and paper release in Brussels last week. The paper, Bridging The Data Gap: How Digital Innovation Can Drive Growth and Jobs, aims to measure the costs of data protectionism and shows how knee-jerk reactions to the NSA revelations may hurt European economies. Michael Mandel, co-author and PPI’s chief economic strategist, explained:

A European Internet might sound like a grand, patriotic idea…But were it to take shape, it would harm few people or places more than Europe and Europeans themselves.

You can find the full article on Bloomberg’s website, here.

FCC’s Wheeler Plays Hand Courts Dealt Him

FCC Chairman Tom Wheeler’s determination that he can allow Internet Service Providers to offer differentiated service options to websites and content providers – an ability that “net neutrality” advocates regard as decidedly non-neutral – surprised many people.  But perhaps it shouldn’t have.

Wheeler’s announcement resolved a mystery created by a recent court decision that the FCC lacked the power to regulate the way broadband providers manage their networks.  Specifically, in a case brought by Verizon, the Court denied Wheeler and the FCC authority to specify that there must be only one tier of service on the Internet, the essence of the neutrality program.  But the Court also recognized his authority to regulate broadband as part of the FCC’s larger obligation to promote the Internet.

Predicting that it was time for Wheeler to lead the FCC past the neutrality debate and modernize the regulation of the Internet was not necessarily an act of clairvoyance – it was simply the product of a level-headed reading of the situation.  I participated in a Progressive Policy Institute forum last month in which a variety of experts, including some advocates of net neutrality, came to a surprising degree of consensus about Chairman Wheeler’s response to the appeals court’s decision. Basically, we thought he had three options for regulating the Internet, and two of them weren’t going to work.

The first, and most radical, would be to declare that the Internet was really “just a telephone network” and therefore subject to the most intrusive regulations the FCC can muster.  That would have been a radical step from several perspectives.  First, and most obviously, saying that the Internet is really “just like” the Ma Bell phone system is like saying a Maserati is “just like” a Model T and should be subject to the same speed limits. But it should also be recalled (particularly by those who think the Internet should be a state-owned “public utility”) that the FCC’s regulation of phones was premised on a sanctioned monopoly in which companies invested without significant risk.  In contrast, the modern Internet was built by over a trillion at-risk, private dollars pouring into competing technological platforms.  On these and a variety of other bases, “reclassifying” Internet as telephony would have a very hard time passing the laugh test in court.

(Nor, in fact, might that resolve the problem – read the original Communications Act of 1934 and you’ll be surprised to see that it’s quite comfortable with differentiated services, so long as they’re made available to all.  Which is, of course, exactly Wheeler’s position eighty years later.)

A second option was to go to the Congress for explicit legislative authority to regulate conduct on the Internet.   I’m not a professional political analyst but…good luck with that.

To be fair, there may be an emerging middle ground in the Congress for an Internet policy perspective that might not be far from where Wheeler is today; in the past few weeks, for example, over 70 House Democrats signed a letter calling for open and unrestricted spectrum auctions, a sharp departure from the view held by some of their colleagues that the winners of those auctions should be prejudged by the FCC.  That’s a vote of confidence in competition.   But some in Congress advocate not just for net neutrality, but for extended public ownership and control of the Internet, while many on the other side doubt that we need any regulatory protections whatsoever, let alone an effort to extend the Internet’s role in such areas as health and education, or addressing the “digital divide.”  So there’s no obvious consensus on any issues of Internet regulation, let alone imposing neutrality through regulation.

Which leaves Wheeler with a third option – to play the hand the Court dealt him.  And he appears to be doing so smartly, by allowing ISPs to offer websites and content providers (often called “edge providers”) prioritization for those services that want it (perhaps high-definition video conferencing or real-time, interactive services such as health, teaching, or gaming and entertainment) while letting the rest of Internet traffic – your e-mail sharing a video of a cat playing the xylophone – to move as it always has, unabated.  He also made it clear that allowing some content to move on “express lane” terms is not the same as blocking other content, and that he would reserve the right to make sure that any prioritization deals were “commercially reasonable.”  Hopefully, this will mean a case-by-case review of actual transactions that have inflicted actual harm on an actual someone, not making judgments that reflect nothing more than the sensibilities of bureaucrats.  In fact, the PPI panel was also in broad agreement on this point – that it was time to embrace a new regulatory perspective that allowed “experimentation” in the way service is provided and that adjudicated contentious issues after the fact and after demonstrated harm has occurred, rather than through blanket, a priori, regulatory pre-emptions.

Wheeler seems to have embraced this approach. He’s getting us off “Square Zero” by recognizing that tiered service has its place, and putting to rest the neutrality debate that my colleague Hal Singer said last month “is sucking all the oxygen out of the room.”  In that sense, Wheeler’s most important accomplishment in announcing his view might be to make clear that opponents have mischaracterized “prioritization” as being the same as “blocking competing content,” “permitted innovation,” threatening the “open Internet,” and other slogans.

These catchphrases are commonly accepted by many media outlets, but now have been put to shame, and hopefully, rest.  Prioritization doesn’t change the reality that everyone who wants to bring content to the Internet can do so without impediment; in fact, the ISPs desperately want them to do so, since that’s the value proposition of what they’re selling.  Making that clear only ratifies what the market has already decided.  Nor does it mean that the ISPs will decide who can innovate and who can’t any more than the post office decides who can send a letter and who can’t when it offers First Class Mail and then Priority Express.  Wheeler has, to his credit, made clear what the real issues are.

And he appears to be disregarding the complaint that prioritization would be unfair to “the little guy.”  If that were the standard, every sector of the economy would come under regulation.  The little guy has to pony up to put his product on supermarket shelves or to buy a $5 million Super Bowl spot.  The Internet will remain a more competitive sector than virtually any other in the economy.  In fact, the Internet is already tilted against the small, start-up website; Big Websites already have speed advantages over the “little guy” due to pervasive caching of content.  Prioritization may make it easier for the “little guy” to catch up.

Let me make two predictions.  First, “prioritization” will change the Internet less than many think.  Network speeds in the US are increasing rapidly, and we have gone from 22nd in the world to 8th in a very short time (once the courts removed regulatory impediments to sustained investment).  And, we are one of the few nations on Earth that have competing platforms bringing broadband to the consumer – phone companies, cable companies, wireless (where we lead the world), and satellite, as opposed to the nations that staked their bets on a national phone company and are coming to regret it.  So our prospects for leadership are excellent.  I’m not sure how many sites will jump at the chance to improve their stream given how good the system as a whole is becoming.

And the second prediction is that Wheeler has now broken the ice and will lead the FCC into a series of decisions in which a “sensible center” finally holds sway. This would include accelerated auctions of spectrum now held by the government and broadcasters, open auctions for new spectrum, allowing the market for “peering” and other backbone transactions to evolve as any other competitive market would, and – one hopes – a revitalized National Broadband Plan to realize the Internet’s social potential.  In all of these cases, the FCC Chairman can reproduce the successful strategy he employed to move the “neutrality” debate forward – seizing the only realistic option in front of him and running with it.

Everett Ehrlich is the president of ESC Company, and a senior fellow at the Progressive Policy Institute.