PPI President Joins Bipartisan Group of U.S. Representatives to Unveil Regulatory Improvement Commission Proposal

WASHINGTON—Progressive Policy Institute (PPI) President Will Marshall today joined Representatives Patrick Murphy (D-Fla.), Mick Mulvaney (R-S.C.) and a bipartisan group of House members to unveil major regulatory reform legislation based on a proposal by PPI to tackle regulatory accumulation, the harmful layering of new federal rules atop old rules year after year.

The Regulatory Improvement Act of 2014 (H.R. 4646) would establish an independent advisory body authorized by Congress—the Regulatory Improvement Commission (RIC)—to review regulations submitted by the public and to remove or improve those that are outdated, duplicative or inefficient. The legislation is identical to a Senate companion bill (S. 1390) introduced by Senators Angus King (I-Maine) and Roy Blunt (R-Mo.).

“Regulatory overload is suffocating economic growth and stifling innovation in the United States,” said Michael Mandel, PPI Chief Economic Strategist. “Regulations are essential for a well-functioning economy, but the federal government needs a systematic mechanism for improving or removing regulations that have outlived their usefulness. The RIC would effectively ‘scrape the barnacles off the bottom of the boat’ and allow our nation’s businesses to move forward on innovating and hiring workers.”

Originally conceived by PPI economists Michael Mandel and Diana Carew, the RIC is modeled after the highly successful military base-closing commission. It would consist of nine members appointed by Congressional leadership and the President to consider a single sector or area of regulations and report on regulations in need of improvement, consolidation, or repeal.

Both Houses of Congress would then consider the Commission’s report under expedited legislative procedures, which allow relevant Congressional Committees to review the Commission’s report but not amend the recommendations. The bill would then be placed on the calendar of each chamber for a straight up-or-down vote.

To avoid the creation of a new government bureaucracy, the RIC would be dissolved after delivering its report and would have to be re-authorized each time Congress wished to repeat the process.

###

Forbes: How to Ease the Crushing Costs of Federal Regulation

Michael Mandel was a guest speaker on Bill Frezza’s RealClear Radio Hour to discuss his proposal for a Regulatory Improvement Commission (RIC).  Frezza’s interview and article dealt with the continuing drag of regulatory costs on the U.S. economy and PPI’s proposal for a politically viable reform option.  Mandel’s radio interview was later quoted in an article Frezza wrote for Forbes.

“The cost benefit approach to fixing the regulatory problem is not going to work,” says Michael Mandel, chief economic strategist at the Progressive Policy Institute, my second guest on this week’s RealClear Radio Hour. He is especially concerned about the way regulatory accumulation impedes innovation. “History tells us that innovation will allow us to deal with a lot of our concerns about the future of the earth in a different way. They are real concerns, but the path forward is more innovation rather than less.”

Mandel goes on to explain the accumulation of regulations over time, the difficulty of keeping up with technological advancements, and how RIC would address these challenges.

Listen to the full interview on RealClear’s website here and read the full article on Forbes’ website here.

FCC’s Wheeler Plays Hand Courts Dealt Him

FCC Chairman Tom Wheeler’s determination that he can allow Internet Service Providers to offer differentiated service options to websites and content providers – an ability that “net neutrality” advocates regard as decidedly non-neutral – surprised many people.  But perhaps it shouldn’t have.

Wheeler’s announcement resolved a mystery created by a recent court decision that the FCC lacked the power to regulate the way broadband providers manage their networks.  Specifically, in a case brought by Verizon, the Court denied Wheeler and the FCC authority to specify that there must be only one tier of service on the Internet, the essence of the neutrality program.  But the Court also recognized his authority to regulate broadband as part of the FCC’s larger obligation to promote the Internet.

Predicting that it was time for Wheeler to lead the FCC past the neutrality debate and modernize the regulation of the Internet was not necessarily an act of clairvoyance – it was simply the product of a level-headed reading of the situation.  I participated in a Progressive Policy Institute forum last month in which a variety of experts, including some advocates of net neutrality, came to a surprising degree of consensus about Chairman Wheeler’s response to the D.C. Circuit’s decision. Basically, we thought he had three options for regulating the Internet, and two of them weren’t going to work.

The first, and most radical, would be to declare that the Internet was really “just a telephone network” and therefore subject to the most intrusive regulations the FCC can muster.  That would have been a radical step from several perspectives.  First, and most obviously, saying that the Internet is really “just like” the Ma Bell phone system is like saying a Maserati is “just like” a Model T and should be subject to the same speed limits. But it should also be recalled (particularly by those who think the Internet should be a state-owned “public utility”) that the FCC’s regulation of phones was premised on a sanctioned monopoly in which companies invested without significant risk.  In contrast, the modern Internet was built by over a trillion at-risk, private dollars pouring into competing technological platforms.  On these and a variety of other bases, “reclassifying” Internet as telephony would have a very hard time passing the laugh test in court.

(Nor, in fact, might that resolve the problem – read the original Communications Act of 1934 and you’ll be surprised to see that it’s quite comfortable with differentiated services, so long as they’re made available to all.  Which is, of course, exactly Wheeler’s position eighty years later.)

A second option was to go to the Congress for explicit legislative authority to regulate conduct on the Internet.   I’m not a professional political analyst but…good luck with that.

To be fair, there may be an emerging middle ground in the Congress for an Internet policy perspective that might not be far from where Wheeler is today; in the past few weeks, for example, over 70 House Democrats signed a letter calling for open and unrestricted spectrum auctions, a sharp departure from the view held by some of their colleagues that the winners of those auctions should be prejudged by the FCC.  That’s a vote of confidence in competition.   But some in Congress advocate not just for net neutrality, but for extended public ownership and control of the Internet, while many on the other side doubt that we need any regulatory protections whatsoever, let alone an effort to extend the Internet’s role in such areas as health and education, or addressing the “digital divide.”  So there’s no obvious consensus on any issues of Internet regulation, let alone imposing neutrality through regulation.

Which leaves Wheeler with a third option – to play the hand the Court dealt him.  And he appears to be doing so smartly, by allowing ISPs to offer websites and content providers (often called “edge providers”) prioritization for those services that want it (perhaps high-definition video conferencing or real-time, interactive services such as health, teaching, or gaming and entertainment) while letting the rest of Internet traffic – your e-mail sharing a video of a cat playing the xylophone – move as it always has, unabated.  He also made it clear that allowing some content to move on “express lane” terms is not the same as blocking other content, and that he would reserve the right to make sure that any prioritization deals were “commercially reasonable.”  Hopefully, this will mean a case-by-case review of actual transactions that have inflicted actual harm on an actual someone, not making judgments that reflect nothing more than the sensibilities of bureaucrats.  In fact, the PPI panel was also in broad agreement on this point – that it was time to embrace a new regulatory perspective that allowed “experimentation” in the way service is provided and that adjudicated contentious issues after the fact and after demonstrated harm has occurred, rather than through blanket, a priori, regulatory pre-emptions.

Wheeler seems to have embraced this approach. He’s getting us off “Square Zero” by recognizing that tiered service has its place, and putting to rest the neutrality debate that my colleague Hal Singer said last month “is sucking all the oxygen out of the room.”  In that sense, Wheeler’s most important accomplishment in announcing his view might be to make clear that opponents have mischaracterized “prioritization” with slogans equating it to “blocking competing content,” the end of “permissionless innovation,” and a threat to the “open Internet.”

These catchphrases are commonly accepted by many media outlets, but now have been put to shame, and hopefully, rest.  Prioritization doesn’t change the reality that everyone who wants to bring content to the Internet can do so without impediment; in fact, the ISPs desperately want them to do so, since that’s the value proposition of what they’re selling.  Making that clear only ratifies what the market has already decided.  Nor does it mean that the ISPs will decide who can innovate and who can’t any more than the post office decides who can send a letter and who can’t when it offers both First-Class Mail and Priority Mail Express.  Wheeler has, to his credit, made clear what the real issues are.

And he appears to be disregarding the complaint that prioritization would be unfair to “the little guy.”  If that were the standard, every sector of the economy would come under regulation.  The little guy has to pony up to put his product on supermarket shelves or to buy a $5 million Super Bowl spot.  The Internet will remain a more competitive sector than virtually any other in the economy.  In fact, the Internet is already tilted against the small, start-up website; Big Websites already have speed advantages over the “little guy” due to pervasive caching of content.  Prioritization may make it easier for the “little guy” to catch up.

Let me make two predictions.  First, “prioritization” will change the Internet less than many think.  Network speeds in the US are increasing rapidly, and we have gone from 22nd in the world to 8th in a very short time (once the courts removed regulatory impediments to sustained investment).  And, we are one of the few nations on Earth that have competing platforms bringing broadband to the consumer – phone companies, cable companies, wireless (where we lead the world), and satellite, as opposed to the nations that staked their bets on a national phone company and are coming to regret it.  So our prospects for leadership are excellent.  I’m not sure how many sites will jump at the chance to improve their stream given how good the system as a whole is becoming.

And the second prediction is that Wheeler has now broken the ice and will lead the FCC into a series of decisions in which a “sensible center” finally holds sway. This would include accelerated auctions of spectrum now held by the government and broadcasters, open auctions for new spectrum, allowing the market for “peering” and other backbone transactions to evolve as any other competitive market would, and – one hopes – a revitalized National Broadband Plan to realize the Internet’s social potential.  In all of these cases, the FCC Chairman can reproduce the successful strategy he employed to move the “neutrality” debate forward – seizing the only realistic option in front of him and running with it.

Everett Ehrlich is the president of ESC Company, and a senior fellow at the Progressive Policy Institute.

Uncluttering State Tax Systems

Over the last week, as you’ve raced to file your taxes by the deadline today, you’ve no doubt been bombarded on talk radio, cable TV, and the opinion pages about how complex and anti-growth the federal income tax system has become. Tax reform is indeed long overdue, but it’s not just the federal code that needs fixing: Many state tax systems are regressive, economically distorting, and mind-numbingly complex.

This month, the Progressive Policy Institute unveiled a unique study ranking the tax systems of all 50 states plus the District of Columbia — the State Tax Complexity Index. The index measures complexity in terms of the number of loopholes lurking in the code. What we discovered surprised us.

First, it doesn’t matter whether states rely on income or sales taxes, or whether they have a single rate or multiple rates — all of these systems can be honeycombed with complicated tax breaks, despite what you may have heard from advocates of a national sales tax or “flat tax.” For example, Hawaii and California, two states with very progressive income-tax systems (Hawaii has more marginal rates than the federal code), ranked among the least complex tax systems in terms of special tax preferences. Meanwhile, states with no individual income tax ranged all over the spectrum; for example, Washington ranked near the top of our complexity scale, Texas finished in the middle, and Alaska was toward the bottom. And states that have a flat tax clustered in the middle of our survey, with the exception of Utah, which tied for 37th.

Second, reducing complexity by eliminating tax breaks can finance lower tax rates and also increase progressivity, because such preferences mostly benefit higher-income individuals and businesses.

Choosing how to measure tax complexity across all types of tax systems was a challenge. The only feature that all systems shared was tax expenditures — tax provisions that provide a targeted benefit to specific individuals and groups, and thereby reduce government revenue. Common tax expenditures include deductions, credits, exclusions, deferrals, and rebates.

Some progressive analysts view tax expenditures as an indirect and more politically palatable form of government spending that obviates the need for new programs and administrative bureaucracies. Conservatives usually see them as a way of chipping away at tax burdens on affluent families and businesses. Either way, the growth of tax expenditures greatly increases tax complexity, because they spawn a special set of regulations that multiply over time and often lead to growing inconsistencies and inequities.

How do we know tax expenditures add to complexity? According to the IRS, the average person filing a 1040 form (which includes those taxpayers who chose to itemize their deductions) devotes 16 hours, the equivalent of two full work days, to the task. The 1040EZ form (which limits the number of deductions, credits, and other tax expenditures), by contrast, takes just four hours.

Tax expenditures don’t just clutter up the tax code; they also leak revenues and usually bestow their benefits upon the least needy among us. Federal tax expenditures cost the government over $1 trillion a year. Because you have to itemize to take advantage of deductions and credits, and because the value of deductions is tied to one’s tax bracket, upscale taxpayers reap the lion’s share of the benefits, whether we’re talking about deductions for charity, for home mortgages, or for health care. One big exception to this general rule is the Earned Income Tax Credit, which is specifically targeted to minimum- and low-wage workers as an incentive and reward for work.

Whether at the state or federal level, the lesson is clear: If simplicity is your goal, you have to reduce the number of tax breaks. Switching to a flat or sales tax isn’t the answer. Closing loopholes will also help governments pay their bills the old-fashioned way, by raising revenue instead of piling up public debt. Plugging revenue leaks will ease pressure for raising tax rates, which should be kept as low as possible. And eliminating tax breaks will reduce economic distortions and help channel capital investment to its most productive uses, rather than those favored by politicians.

That’s just as true on the state level as it is in Washington, D.C. So if federal lawmakers ever do get around to serious tax reform, they should invite the nation’s governors to the table, too.

This op-ed was originally published by Real Clear Politics, read it on their website here.

Bringing U.S. Energy Policy Into the 21st Century

U.S. lawmakers don’t drive around in 1970s-era cars, yet they don’t seem to mind energy policies that are equally out of date. Attempts to export shale oil and gas, for example, have run smack into legal and regulatory barriers as old as a Gran Torino.

Energy companies have been urging Congress to lift the lid on exports and start treating oil and gas again like any other commodity that’s freely traded in world markets. Tapping global demand for U.S. shale oil and gas, they say, will spur domestic production and create even more jobs in a sector that’s already racked up robust employment gains.

Russia’s naked power play in wresting Crimea from Ukraine has given fresh impetus to the export push.

From outraged Republicans to eastern Europeans living anxiously in Moscow’s shadow come calls to use America’s shale windfall to wean Europe off dependence on Russian gas, oil and coal.

The idea that surging U.S. gas and oil production is a new source of geopolitical power is a seductive one, though there are practical difficulties inherent in using energy as an instrument of foreign policy.

Vladimir Putin’s Russia is not as scary as the Soviet Union, but it remains an energy superpower. Moscow supplies Europe on average with roughly a third of its energy; many Baltic and central European countries rely almost completely on Russian gas, oil and coal. Some observers think such realities have muted Europe’s reaction to Putin’s aggression.

Taking market share from Moscow would diminish its political leverage, while also weakening its petro-centric economy. Energy accounts for as much as a quarter of Russia’s GDP, 60 percent of its exports, and the lion’s share of its revenues. The problem, of course, is that Washington doesn’t export oil and gas, companies do. They go where the profits are, not where geopolitics dictates.

In any event, U.S. gas and oil exports are stalled by old laws and rules as well as potent domestic opposition. For example, the 1975 Energy Policy and Conservation Act bars most exports of U.S. crude oil. Exporting natural gas isn’t illegal, but it requires getting the U.S. Department of Energy’s approval to build terminals for liquefying the gas so it can be shipped overseas. Amid industry complaints that the Department of Energy is slow-walking approvals, Congress recently held hearings on ways to expedite LNG export licenses.

America’s import-oriented energy policies are a legacy of the 1970s energy crises. They are predicated on an assumption of fossil fuel scarcity and U.S. vulnerability to volatile global oil markets. Today’s reality is abundance, thanks to horizontal drilling techniques and shale fracturing, aka, “fracking.” Next year, the United States is expected to overtake Saudi Arabia as the world’s largest oil producer.

The energy world has been turned on its head, but U.S. policies haven’t changed. Powerful interests are invested in preserving the status quo. Chemical companies, which use natural gas as a feed stock, say ramping up exports would raise domestic gas prices and thereby threaten a revival in U.S. manufacturing. Some analysts say we’d be better off using more natural gas in the transportation sector, for cars as well as heavy-duty trucks, because this would cut both carbon emissions and oil imports.

The fiercest opposition to exporting oil and gas comes from environmental activists. In an open letter to President Obama, a coalition of environmental groups led by anti-Keystone XL pipeline crusader Bill McKibben slammed the administration’s plans for building LNG terminals along U.S. coastlines. “We believe that the implementation of a massive LNG export plan would lock in place infrastructure and economic dynamics that will make it almost impossible for the world to avoid catastrophic climate change,” the letter asserts. Most of the nation’s fossil fuel reserves, it adds, should stay “in the ground.”

It’s highly unlikely, though, that the public will support attempts to stuff the shale genie back in its bottle. According to the U.S. Energy Information Administration, jobs in the oil and natural gas industry grew by 32 percent between 2007 and 2012, even as overall employment fell 11.4 percent. The glut of cheap gas is also a boon to energy-intensive industries in the United States, which are beginning to attract significant investment from Europe, where energy costs are much higher.

Moreover, it’s not a foregone conclusion that taking advantage of America’s shale bonanza will bring on an environmental catastrophe. On the contrary, fuel switching in the electricity sector from coal to natural gas already has brought a 10 percent decline in U.S. greenhouse gas emissions, according to the Environmental Protection Agency. If gas catches on as a transport fuel, that also would yield lower emissions. In any event, fossil fuels will continue to be a major part of America’s fuel mix for decades to come, green fantasies notwithstanding, and lawmakers must manage the nation’s energy portfolio — including zero-carbon-emitting nuclear energy—in a way that both spurs economic growth and reduces the risks of global warming.

In truth, no one really knows what will happen if America once again becomes a major energy exporter. We can’t say for sure whether domestic prices will spike, or how global markets would react to an influx of U.S. oil, or what the net effect on global carbon emissions would be. Nor is it certain that exports by energy companies would buttress U.S. diplomacy. The sensible course is to experiment—to lift restrictions on oil and gas exports at a measured pace, track economic and environmental impacts, and make adjustments as we go. That should be part of a political bargain in which Democrats agree to ease export controls in return for GOP support for more public investment in research and development of renewable fuels and clean technology.

What makes no sense is to let the dead hand of 40-year-old energy policies constrain America’s freedom of action today. As the shale revolution approaches its 10th anniversary, it’s time to bring U.S. energy policy into the 21st century.

This piece was originally published at the Daily Beast.

Thanks To Bill Clinton, We Don’t Regulate The Internet Like A Public Utility

A DC federal court struck down the FCC’s “net neutrality” regulations earlier this year, but did nothing to resolve an ongoing debate over whether or how the government should regulate the Internet.  At the heart of the controversy lies a central question – should we regulate the Internet as we did the old telephone network and other so-called “common carriers”?

In a paper to be released this week by the Progressive Policy Institute, I examine the past two decades’ experience to shed light on this question.  And the answer that keeps coming up is that proposals for strict utility-style regulation of the Internet have two things in common.  First, they are based on the presence of a “natural monopoly” for broadband that simply does not exist.  And second, where they have been tried, utility-style rules have been the greatest single obstacle to investment in broadband infrastructure.

From the earliest days of the Bell monopoly, our telephone system was built around an explicit bargain.  In exchange for a guaranteed and low-risk profit, the Bell system would provide quality, reliable phone service to the nation.  This bargain was deemed necessary because it was assumed that phone service was a “natural monopoly” where the costs of infrastructure were so high that competition wasn’t possible.  But by the 1990s, those assumptions had completely broken down.  Microwaves and coaxial cable could carry phone calls, phone lines could deliver video, and an “information superhighway” loomed in the future.

The Clinton Administration’s Telecommunications Act of 1996 sorted this mess out and launched the age of modern Internet policy – trusting market forces and technological innovation to the maximum extent.  It was an act of incredible political maturity.  Its authors knew something remarkable was about to happen and that government could best serve it by stepping back and letting private investment happen.

So the 1996 Act drew a line – the old phone system would remain regulated as a “common carrier,” but the emerging new world of “information services” would be allowed to develop on its own free from utility-style requirements such as government oversight of prices, forced sharing of infrastructure with competitors, or rigid traffic management rules.  As a result, we have seen over $1.2 trillion in investment since the 1996 Act, and the innovation, growth and new services the Act’s framers imagined.

Further light is shed by the treatment of the incumbent phone companies.  As a transitional measure, the Act preserved the utility model for the telcos, which were forced to share any infrastructure they built with all comers at a government-supervised price (well below its long-term cost).  That requirement smothered investment, since no one would build new infrastructure if they had to share it with competitors at a loss.  The result was initial stagnation in DSL broadband.  And when that requirement was later overturned, investment followed there as well – more evidence of the dangers of the utility model in this space.

Europe still relies on these utility-style regulations and has used its state post and phone monopolies to build out broadband.  The results haven’t been pretty.  Per capita investment in broadband in the U.S. is nearly double that of Europe.

As a result, our major European trading partners are anchored near the bottom of the Internet speed charts – Germany is 27th in the world on the most recent Akamai speed rankings, France is 34th, Italy 48th.  The U.S., by contrast, is 8th despite its sprawling geography, trailing mostly small, dense, and highly urbanized places like Japan, South Korea, and Hong Kong.  No wonder EU Digital Policy chief Neelie Kroes says Europe “needs to catch up” in broadband.

The “natural monopoly” that pro-regulation arguments depend on clearly does not exist.  America now has three different broadband technologies fully deployed and competing for customers (cable, telco, and 4G wireless).  The U.S. is near the top of global rankings in high-end service, with 85 percent of households served by networks capable of 100 Mbps or more, and has the most affordable entry-level wired broadband of any nation in the OECD.  Imagine what would ensue if we were to change course and regulate the Internet as a monopoly utility.  Which of the three technologies would regulators adopt?  How would we ensure continued investment?

The Internet is undeniably incredibly important.  But that importance doesn’t mean that we should treat it as a public utility.  Bringing back the days of Ma Bell won’t fulfill broadband’s remarkable promise.

This article was originally posted by Forbes.  You can read the original post on their website here.

Sens. Johnson, Crapo On The Right Track to Housing Reform

The housing sector is one of the pillars of the U.S. economy. That’s why we have marveled at the many partisan and radical proposals to reform the federal housing finance system that would have thrown out what’s good along with what’s bad in the current system. PPI continues to maintain that any reform proposal must stabilize U.S. housing markets, reduce the government’s over-sized footprint in housing finance and protect taxpayers from a repeat of the housing bailout.

While the full details aren’t yet available, a bipartisan proposal from Senators Tim Johnson (D-South Dakota) and Mike Crapo (R-Idaho) seems to move the housing debate out of the ideological realm and closer to reality. Their blueprint ensures the continued availability to homebuyers of long-term, fixed-rate mortgages, and proposes creation of a fee-based insurance fund, similar to the Federal Deposit Insurance Corporation’s, to shield taxpayers from having to bail out the housing finance sector in the future.

There are still many details in question, but we think Senators Johnson and Crapo have pointed the housing debate in a more promising direction.

How Season 2 of House of Cards Murders the 25th Amendment

As the nation binges on Season 2 of “House of Cards,” we have witnessed ruthless House Majority Whip Frank Underwood (Kevin Spacey) maneuver for the vice president to resign and for himself to be appointed to the position. It’s no spoiler to say that Underwood clearly won’t be content to remain a heartbeat away from the presidency.

But the biggest casualty of “House of Cards” might well turn out to be the 25th Amendment, which governs vice presidential succession. Once again, the amendment has, at least in popular fiction, been transformed from a pragmatic constitutional provision into a Machiavellian route to power. And that’s a shame, because in a real-world time of crisis it could be incredibly valuable — but only if an ongoing stream of fictional portrayals hasn’t distorted its public image beyond recognition.

The 25th Amendment was enacted in 1967 during the height of the Cold War, at a time of hair-trigger tensions and the ever-present reality of nuclear warheads mounted on fast-flying intercontinental ballistic missiles. The need for near-instantaneous decision making and continuity of the command authority during the Cold War was clear, yet the nation had twice found itself with a vacancy in the vice presidency, for nearly four years after Truman succeeded to the top job in 1945 and again for over a year after the Kennedy assassination.

This problem was rooted in an oversight of the founders. They had crafted a vice presidency to assume executive authority in the case of the death, resignation, or removal of a president, but had not provided a way to fill the ensuing vice presidential vacancy before the next regularly scheduled general election. As a result, between 1789 and 1967, through a combination of presidential and vice presidential deaths and resignations, the VP slot had been vacant 16 times for some 40 years in total, or nearly 20 percent of American history.

Depending upon the law of presidential succession at the time, the secretary of state or the speaker of the House was bumped up to next in line to the Oval Office. But the former lacked democratic validation, while the latter was not part of the sitting administration, and might even be its vehement political enemy. The 25th Amendment was thus enacted to enable the president to fill a vacancy in the vice presidency, subject only to confirmation by simple majorities of both houses of Congress.

On “House of Cards,” Underwood’s machinations have arranged for the unhappy sitting Vice President to step down and for him to be named in his stead, with various types of murder and mayhem enacted along the way. This is a far cry, however, from how the mild-mannered and upstanding Gerald Ford actually found his way to the Oval Office via the 25th Amendment in 1974.

After VP Spiro Agnew was pressured to resign due to bribery charges, Nixon looked around for a harmless placeholder until the 1976 election. Neither he nor Ford imagined that Nixon would himself likewise be forced from office, following what Ford termed the “long national nightmare” of the Watergate crisis. A longtime member of the House, Ford had never been elected by any constituency larger than the area around Grand Rapids, Michigan, yet he assumed full executive authority, more in sadness than in triumph. Now though, thanks to “House of Cards,” what was then a constitutional lifeline is now best known as a vaguely illegitimate back route to power.

And “House of Cards” is not the only fictional outlet to rough up the 25th Amendment, which includes two other provisions addressing another oversight of the founders: what to do about a president who was not dead but was severely incapacitated. Section 3 of the amendment allows the president to designate the VP to temporarily become acting president. It can be invoked only by a still-conscious president, as has been done three times by presidents who were anesthetized for brief medical procedures. The provision was intended for such short-term situations or, even more so, for protracted periods of incapacity such as those following Woodrow Wilson’s stroke in 1919 or Dwight Eisenhower’s heart attack in 1955.

Predictably, though, television took this sober precaution to an outlandish level on the long-running series “The West Wing,” in which the daughter of Democratic President Jed Bartlet (Martin Sheen) was at one point kidnapped by terrorists. This created an insoluble conflict of interest for the president, leading him to invoke Section 3 of the 25th Amendment and temporarily step aside. Conveniently, the vice president on the series had also recently resigned. This enabled the next in line, a boorish Republican speaker of the House (John Goodman), to briefly become commander-in-chief and thus to play havoc with the administration’s policies.

Most controversially, Section 4 of the 25th Amendment enables vice presidents themselves to initiate the power transfer, provided that they have the counter-signatures of a majority of cabinet officials. The single time this provision unambiguously should have been invoked was in 1981, after an assassination attempt left Ronald Reagan unconscious. But a mere three months into their term, Vice President George H.W. Bush was determined to avoid even a hint of an unseemly power grab, and never invoked the amendment despite Reagan’s manifest incapacitation. One of the indelible images of that day was mass confusion at the White House and the infamous, and erroneous, declaration by Secretary of State Alexander Haig that he was in charge while Bush was traveling back to DC.

Of course, popular culture has likewise since latched onto this scenario, most famously in the Hollywood movie “Air Force One,” in which the president’s plane has been hijacked with him aboard. Although these were hardly the circumstances originally envisioned for the amendment, it undoubtedly applied — the president (Harrison Ford) could hardly have been more incapacitated than while evading terrorists at 35,000 feet. But the vice president (Glenn Close) stalwartly refuses to make the correct choice to temporarily transfer power to herself, despite urging from a cabinet more sensible than she.

So, what do these fictional scenarios have in common with reality? Not very much. But they do play to an enduring fascination with convoluted Shakespearean scheming to seize the throne, such as in “Macbeth” and “Richard III.” Hopefully, the 25th Amendment will never need to be invoked in such dramatic circumstances. But the reality is that it conceivably could be — and in a moment of crisis and confusion, the perception of order and legitimacy may count for a lot. It certainly won’t help if the public’s most enduring impression of the mechanism for orderly succession involves the scheming of Frank Underwood.

This piece was originally published in The Daily Beast; you can read it on their website here.


Protecting the Environment for Innovation: A Regulatory Improvement Commission

A Regulatory Improvement Commission would solve the issue of regulatory accumulation, the layering on, year after year, of new rules atop old ones. The fundamental problem is not that government keeps creating new rules, but that it never rescinds old ones. As a result, U.S. businesses and entrepreneurs are enmeshed in an ever-growing web of complex rules that are sometimes duplicative, sometimes in conflict with each other, and sometimes obsolete. Like barnacles on a ship’s hull, the sheer number and weight of regulations imposes a drag on economic growth. Regulatory accumulation also raises the costs of entry for entrepreneurs, and creates big opportunity costs as the time and attention of business managers is consumed by compliance with rules rather than by creating new products or better business processes.

This problem demands an institutional response. It is unrealistic to expect the same agencies that promulgated rules to eliminate or modify them. Some new entity must be created and charged exclusively with pruning old rules that inhibit innovation and entrepreneurship. PPI has proposed creation of a Regulatory Improvement Commission (RIC) to fill this vacuum. It is modeled on the Defense Base Realignment and Closure Commission (BRAC), which Congress created in 1990 to create a politically feasible way to reduce excess military infrastructure.

The RIC would consult experts, business and the public to draw up a list of regulations that should be eliminated or improved, and present it to Congress for an up-or-down vote. As a small body that convenes occasionally, and relies largely on staff loaned to it by Congressional and Executive Branch offices, its costs would be negligible. The savings — both in terms of retiring costly rules and reducing the drag of regulatory accumulation on economic growth and entrepreneurship — could be enormous.

Like BRAC, the RIC would provide political cover to Members of Congress, who otherwise would have to vote individually on rules often defended by entrenched and politically powerful interests. The process of “getting it done” has already begun: Sens. Angus King and Roy Blunt last year introduced bipartisan legislation in the Senate to establish the RIC.

The above remarks were prepared for delivery at the Kauffman Foundation’s 2014 State of Entrepreneurship event on February 12.

For recent PPI work on regulatory reform, see our latest policy memo and op-ed.

Has the FCC Chairman Solved the Net Neutrality Quagmire?

Up until the D.C. Circuit’s recent decision in Verizon v. FCC, extreme voices on the political spectrum dominated the “net neutrality” debate. The far left pressed for extensive government interference in the dealings between broadband providers and websites. And the far right questioned the FCC’s authority and need to regulate Internet services. The D.C. Circuit truncated both sides of the distribution of voices: by rejecting the left’s draconian methods, and by affirming the FCC’s authority and basis to regulate Internet services, the Court paved the way for a reasonable compromise. To satisfy the Court, however, the new regulatory regime must leave “substantial room for individualized bargaining and discrimination in terms” of special-delivery arrangements; otherwise it would amount to an outdated mode of regulation called “common carriage.”

The solution, which Bob Hahn, Bob Litan and I have been peddling for a few years, involves the FCC making case-by-case decisions, known as “adjudication” in administrative law. In a nutshell, the FCC would permit special-delivery arrangements between broadband providers and websites, but the agency would police abuses of that newfound discretion through a complaint process. Adjudication would ensure consumer protections on the Internet, and it would bolster the incentives of both websites and broadband providers to invest at the edges and the core of the network, respectively, generating even more benefits to consumers.

Fortunately, the two Bobs and I are no longer the sole defenders of adjudication. In the two weeks since the decision, adjudication has been endorsed by Professor Kevin Werbach in the Atlantic, Professor John Blevins in the Washington Post, and Professor Stuart Benjamin in his blog post. Most importantly, the concept was floated by FCC Chairman Tom Wheeler in a recent speech in Silicon Valley, and made even more explicit in his speech at the State of the Net conference this week. From an economic perspective, adjudications are the most efficient and most equitable solution available to the Commission.

Washington Monthly: What If the US Had a Multiparty System Like Germany’s?

With the U.S. still barely recovered from partisan gridlock and political dysfunction, Germany has once again formed a “grand coalition” bringing together the two main center-right and center-left parties, which collectively won more than 70% of the vote in last September’s parliamentary elections. The biggest sticking point? Figuring out the best mechanism for determining the country’s minimum wage.

How do the Germans manage to produce such cooperation and consensus in a system of five parties – and what might politics look like if the U.S. had such a multiparty system? Part of the answer in Germany lies in the intricate construction of its electoral process which, for obvious historical reasons, was designed after World War II to decentralize and disperse power.

Members of the lower house of parliament, the Bundestag, are chosen through a process in which each German citizen has two votes. The first vote, as in the U.S., is cast for an individual person to represent a specific electoral district. The second, and ultimately more influential, vote is cast directly for a political party and determines the overall party composition of government.

Such use of a proportional representation system almost guarantees that Germany will have a multiparty system. But in order to avoid chaotic hyper-fragmentation among parties (as found, for instance, in Italy) Germany enforces a threshold of 5% for a party to enter into the Bundestag. Essentially, any party that fails to gain at least 5% of the national vote is excluded from parliament, a provision that has proven useful in promoting centrism and marginalizing extremes, including both neo-fascist parties and the remnants of the old Communist Party in East Germany.

In all, the German system has tended to yield parliaments with about five parties represented — which is also roughly what the U.S. political system might produce under similar rules.

Consider first the Democrats in the U.S., who have long been a loose coalition between classic “blue collar voters” (who have a strong interest in issues like labor rights and the social safety net) and socially liberal voters (who are focused more on themes of multiculturalism and diversity). Of late, the two branches have been cooperating well. But the old fault lines can still turn up, such as in the debate over immigration, in which one wing is mostly concerned about domestic wage competition and the other places more emphasis on the civil rights of minorities. It’s not hard to imagine the blue-collar Democrats and the socially liberal Democrats forming separate parties under a proportional representation approach.

The Republicans, it’s now evident, are much more fragmented, consisting of a rump of “Establishment Republicans,” a Tea Party cohort maniacally focused on reducing the size of government, and a religious right that prioritizes “traditional values.” Clearly these groups do overlap, as perhaps best illustrated by the fondness of Michele Bachmann both for overturning Obamacare and for heralding the arrival of the Rapture. In a multiparty system, these various wings would likely sort into three separate parties – thus totaling five parties across the political spectrum, as is usually the case in Germany.

This year in Germany, the 5% threshold led to the exclusion of the small, free-market-oriented Free Democratic Party, which had served since 2009 as the junior partner in the government led by Angela Merkel’s Christian Democratic Union. This forced Merkel to turn to the Social Democrats to reach a governing majority.

Naturally, each major party would prefer to have unilateral control over government, but the decision to form a broad governing coalition between the German center-right and the center-left is hardly unprecedented: the same situation prevailed from 2005 to 2009, a period during which Germany weathered the global economic downturn far better than most countries. Whereas the American two-party system has led to sharp polarization, the German multiparty system has pushed its parties toward greater accommodation.

“Grand coalition” governments are not panaceas. Most notably, they often suffer from an inability to offer more than incremental changes and a tendency to fracture under stress. In the longer run it’s also problematic not to have the government in power checked by forceful opposition from a major party outside government.

Still, such arrangements promote broad consensus and enhance the stability of a political system, given that the governing coalition incorporates parties supported by 7 in 10 voters. The last time the U.S. had anything remotely like such a grand coalition was in the period after 9/11, when leaders from both parties coalesced around President George W. Bush and Democrats made little attempt to use their one-vote Senate majority to obstructionist ends.

The 11th-hour vote on October 18 to reopen the U.S. federal government and avert a catastrophic debt default also offered the faint outline of a centrist governing coalition: the measure passed the Senate by 81-18 and the House by 285-144, with the support of leaders of both parties in both houses and of the president. The more recent Ryan-Murray budget deal also offers prospects of reasonable compromise. This makes it all the more intriguing to imagine what American government could accomplish with four years of the sort of sensible, centrist politics and policy making that seems likely to prevail in Germany thanks (at least in part) to its multiparty system.

This piece was originally published by Washington Monthly; you can read it on their website here.

America’s Digital Policy Pioneers

On Wednesday, we honored Larry Irving, Ambassador Bill Kennard, Ambassador Karen Kornbluh, Ira Magaziner, and Michael Powell as digital policy pioneers at our event “Enabling the Internet: A Conversation with America’s Digital Policy Pioneers.” Each of these individuals made important contributions that led to the Internet’s exponential growth and rapid emergence as a tool for communication, information access, global commerce and social networking. PPI brought them together on one stage to continue our ongoing conversation about how government can collaborate with private enterprise to take advantage of technology as a major engine of the U.S. economy.

These leading architects of U.S. digital policy looked back to the early debates and key decisions over Internet regulation, and forward to the modern challenges of data security and privacy, international governance, the advent of the “Internet of Everything,” and national firewalls abroad. Larry Downes, the panel’s moderator, guided the conversation by asking the panelists to describe the challenges they faced in the first days of the “information superhighway” and extrapolate how those lessons might be applied to the decisions facing policy makers today at home and abroad. A consensus emerged around the principles of bipartisanship and the idea that legislation of new technologies should always lead with “do no harm.”


2013 Digital Policy Pioneers: Ira Magaziner, Ambassador Karen Kornbluh, Larry Irving, Michael Powell and Ambassador Bill Kennard

Our Odd Upper House: The U.S. Senate’s Peculiarities Don’t End With the Filibuster

The filibuster is back in the news, but that’s just one of the peculiarities that make the U.S. Senate perhaps the world’s oddest legislative chamber.

When viewed from an international perspective, three other features — the extraordinary scope of its powers, its drastic misapportionment, and the exceptional weakness of its leadership structures — make the U.S. Senate a true global outlier. Further, each of these features has a significant (and often negative) impact on American democracy, politics, and policymaking.

The Senate is an Exceptionally Powerful Upper House: The Senate shares full legislative, budgeting, and oversight authority with the House; it also has additional powers to confirm executive nominees and to ratify treaties. However, among legislatures in the world’s established democracies, the norm is for upper houses to be decisively weaker than lower houses. Besides Italy, no other member of NATO, the European Union, or the G-8 has an upper house whose power matches that of its lower house.

Indeed, of the 23 countries that have been independent and continuously democratic since 1950, only three besides the U.S. have powerful upper houses. The remainder are either unicameral, and hence have no upper house at all, or apply a version of the so-called “one-and-a-half house” approach. Under this approach, weak upper houses play a role by reviewing legislation, voicing minority opinions, and suggesting amendments. But they rarely initiate major bills and, most importantly, they can be overridden by the lower house in cases of disagreement.

Such constitutional arrangements greatly streamline the legislative process and facilitate the creation of coherent public policy. In contrast, political systems with two equally powerful chambers, such as Italy and much of Latin America, are much more prone to ineffective governance of the kind we’ve been witnessing in Washington D.C.

The Senate is Extraordinarily Malapportioned: If the concept of “one-person, one-vote” is the modern gold standard for the allocation of political power, then the Senate is easily one of the world’s least representative legislative houses. California’s 37 million residents outnumber Wyoming’s 576,000 by a factor of 66 — yet both states have two U.S. Senators. Likewise, North Dakotans have 38 times the per capita influence of Texans in the Senate, and Vermonters have 31 times the Senate clout of their neighbors in New York.

When translated into votes on the Senate floor, the nine most populous states represent just over 50 percent of the population but have a mere 18 Senators. The 26 smallest states have a majority of 52 Senators, but include only 18 percent of the national population.

The small-population states have repeatedly benefited from their outsized representation in the Senate by receiving disproportionate funding in such policy areas as food and nutrition, community development, environmental quality, disaster relief, and homeland security. Although some other democracies also have malapportioned upper houses, those upper houses tend to be relatively weak and thus their policy impact is much less pronounced.

The Senate is Largely Leaderless: Although the Senate Majority Leader is often equated with the Speaker of the House, by comparison the power of the Senate’s top figure is ambiguous and diffuse. Unlike the Speaker, the Senate Majority Leader has no Rules Committee and few other tools to determine the flow of legislation or to limit the amount of deliberation, debate, and delay on the floor. Senator Robert Dole once opined that he was not “the Majority Leader, but the Majority Pleader.”

The major difference between the houses, of course, is that individual Senators can and do make creative use of that chamber’s expansive rules of debate and amendment — including, yes, the filibuster, which was only partly reformed by last week’s actions by Senate Democrats.

Virtually nowhere else in the world can a single rank-and-file member of a legislature so easily bring the work of the entire body to a halt. And as a consequence, few other legislative chambers have leaders who are so weak relative to the average member, and thus so unable to set coherent goals or to move the institution beyond impasses. Former Senate Majority Leader Trent Lott got it about right when he titled his autobiography “Herding Cats.”

So can any of this be meaningfully addressed? Reform of the Senate has been discussed for almost as long as the Senate has been in existence, and change does not come easily. However, last week’s events show that incremental reforms are achievable. The year 2013 also marks the centennial of the 17th Amendment, which ushered in huge changes by mandating the direct election of Senators — and also made it clear that sweeping reform of the Senate is indeed possible.

These debates are sure to continue, but in the meantime it’s valuable for Americans to understand our unusual upper house as it really is, and not as we may assume it to be.

The Huffington Post published this article by Raymond Smith, a PPI senior fellow. You can find the original article here.

New Yorker: More Freedom on the Airplane, if Nowhere Else

The New Yorker‘s James Surowiecki referenced a study by PPI’s Michael Mandel, chief economic strategist, in an article about the true value of digitally based companies. The author cited Mandel and others to substantiate the idea that these companies have been traditionally undervalued:

“Another study, by the economist Michael Mandel, contended that the government had underestimated the value of data services (mobile apps and the like) by some three hundred billion dollars a year.”

Read the entire piece in the New Yorker here.

21st Century Regulation: One New Approach

We at PPI believe that regulatory reform is an essential part of a high-growth, high-innovation economic strategy.  But regulatory reform is not a secret code word for less regulation. Rather, we are looking for better ways to accomplish essential regulatory goals. That’s why we have long supported the idea of a Regulatory Improvement Commission, a proposal that was picked up and turned into legislation by Senators Angus King and Roy Blunt.

From that perspective, we were very glad to host a distinguished panel of regulatory experts this week on the subject of 21st Century Regulation: Using Technology and Data Analysis to Improve Results. Brian Bieron of eBay presented PayPal’s proposal for “developing new regulatory models that better achieve societal goals and also support rapid innovation.” Brian argued that regulators could and should take advantage of “increasingly ubiquitous 21st Century technology and data-analytics techniques used by technology-enabled organizations” to make faster and better decisions. This would be an enormous change, since the current regulatory process is biased in the direction of moving slowly. Elaine Kamarck of Brookings and Hester Peirce of Mercatus commented. Discussion was both vigorous and on-point.

A copy of the PayPal paper can be found here. It should be required reading for anyone interested in new ways of making regulation work better, without harming innovation.

Student Debt: The FAQs on Pay As You Earn (PAYE)

In August 2013, President Obama announced a major drive to increase enrollment in “Pay As You Earn” (PAYE), a federal student loan repayment option based on income and family size. PAYE was introduced by the administration in 2011 as temporary relief for struggling borrowers.

With the planned expansion, however, the program is fast becoming a permanent part of higher education funding. PAYE is particularly targeted at young college graduates, who have been among those worst affected by the Great Recession and slow recovery.

Given PAYE’s increasing role as a policy tool, it’s important we get our FAQs straight on what PAYE is and the potential implications for borrowers, colleges and universities, and taxpayers.

This factsheet addresses some common questions about PAYE, to help inform the discussion surrounding the future of higher education funding.

Read the entire Factsheet on PAYE here.