Labor Boosted by Proposed Merger

America’s embattled labor movement hasn’t had much to celebrate lately, so it’s worth noting when a major union welcomes a business mega-merger.

The Communications Workers of America strongly endorsed AT&T’s proposed $39 billion acquisition of T-Mobile. Deals this big – the merger would create the nation’s largest mobile-phone carrier, with about 39 percent of the market – have to run a bruising, multi-agency regulatory gauntlet. Some consumer groups worry that the deal will reduce competition in the lucrative telecommunications sector, dampening incentives for innovation and possibly pushing up consumer prices.

No doubt the deal merits close scrutiny. But having one of America’s largest private unions (700,000 strong) in its corner can’t hurt AT&T’s chances.

C.W.A. represents 42,000 AT&T wireless workers and regards the company as reasonably friendly to unions. The merger gives it a better shot at organizing T-Mobile workers in the U.S. and in Germany (T-Mobile is owned by Deutsche Telekom, whose stock zoomed after the announcement). For those workers, being absorbed into AT&T will mean “better employment security and a management record of full neutrality toward union membership and a bargaining voice,” said C.W.A. president Larry Cohen.

This rare bit of good news for organized labor follows successful efforts by Republican governors in several states to curtail public workers’ right to collective bargaining. Although polls show majorities of Americans are opposed to denying bargaining rights, high-profile battles in Wisconsin, Indiana and New Jersey have drawn the public’s attention to the adverse impact on state budgets of generous compensation schemes for state employees, especially pension and health care benefits.

This is a huge problem for organized labor, which in recent decades has experienced growth only in the public sector. The picture is especially dismal in the private sector, where less than eight percent of workers are unionized.

If they are going to reverse their long pattern of decline, U.S. labor unions need to redefine their economic role and relevance to American workers in a post-industrial economy. Cohen’s statement pointed to a mission that would be good for both U.S. workers and employers: building modern infrastructure to underpin America’s ability to win in global markets. “For more than a decade, the United States has continued to drop behind nearly every other developed economy on broadband speed and build out,” he said.

In fact, a big national infrastructure push represents common ground on which big labor and big business can meet. In an “odd couple” pairing last week, AFL-CIO President Rich Trumka and Tom Donohue, head of the U.S. Chamber of Commerce, showed up to endorse a new proposal for a national infrastructure bank. Drafted by a bipartisan group of U.S. Senators including John Kerry, Mark Warner and Kay Bailey Hutchison, the bank would leverage billions of dollars in private investment in new transport, energy and water projects.

If labor and business can get behind an ambitious project for “internal nation-building,” our equally polarized political parties surely should be able to follow their example. And that bodes well for an American economic comeback.

Facebook and Twitter Alone Can’t Sustain Democracy

From Tunisia to Egypt to Libya, as governments continue to teeter and fall, the voice of a new generation bolstered by the internet is opening doors for democracy. But though the celebrations in the town squares of Tahrir and Mohammad Bouazizi are still fresh, brutal crackdowns in Iran, Libya, and Bahrain show how fragile the call for democracy is – and why technology alone can’t sustain democratic revolution.

Over the past month, the world watched as legions of young tech-savvy Twitter and Facebook users banded together in a virtual civil society to create change in their governments. Using a new, free, and open tool such as the internet was a powerful way for the first plugged-in generation in history to demand change. When their respective governments attempted to censor the protesters, a worldwide safety web was immediately cast for the photos, videos, and online messages that would mobilize, organize and encourage the citizens. The rules of political organizing had been changed — freedom was in the air.

Social media is filling an important vacuum in these revolutions, becoming the fabric of civil society that is otherwise missing from autocratic states. A vibrant civil society with a strong NGO community is the glue that keeps any democracy together. It takes a multitude of organizations, student groups, institutions, and other volunteers to safeguard the fresh, new democracies that are springing up in countries such as Egypt, Liberia, and Ukraine. Without a strong civil society and an independent open economy where citizens feel safe, democracy will fail.

Strong and independent non-government groups support democracy by providing a channel for every citizen to work within to achieve change in policy and to safeguard hard-won freedoms.

Such independent groups provide necessary forums for citizens to moderate conflict, teach democratic principles, and push for political change in a peaceful and legitimate manner.

If the United States wants to help citizens protect their new democracies around the world, we ought to start with the basic foundation of our country – that a government for the people and by the people requires more than Facebook and Twitter. Capacity-building NGOs and volunteer citizens must band together to offer their country a support system during these fragile times.

America and like-minded countries such as Poland and the Czech Republic can lend a hand, organizing advocacy groups that can mobilize the NGO community and citizens. After the fall of the Berlin Wall, ‘Freedom Fighters’ traveled around the globe to share their first-hand knowledge with stakeholders in other emerging democracies.

Social media has played an integral part, but for an effective follow-up to virtual revolt, an old-fashioned civil society is what these fresh new democracies require. Though the door to democracy has opened in some countries, it will take a strong, independent civil society to ensure that it will not be slammed shut once again. As the online world comes face to face with the military might of the entrenched powers that be, there is a need for on-the-ground organized citizen engagement and dialogue.

State of the Union: Obama Gets Innovation Upside-Down

In his State of the Union speech, President Obama spent a lot of time on innovation, regulation, and jobs. That’s good. Unfortunately, in all three cases he got his priorities upside down.

Let’s start with innovation.  I counted how many words the President devoted to different areas of innovation.

  • 2 words for biomedical research, the area where the U.S. is far ahead of the rest of the world.
  • 68 words devoted to extolling the job-creating virtues of space travel and NASA, an agency which currently has no mission unless it gets a lot more money.
  • 113 words for high-speed wireless broadband, a worthy goal.
  • 361 words in favor of clean energy, a technology where the U.S. has little competitive advantage over the rest of the world.

In other words, Obama spent his time lauding our least competitive areas of innovation, while giving the back of his hand to biomedical research, the area where we have the clear global advantage.

If you think I’m exaggerating, take a look at these two charts. When it comes to life sciences, the U.S. is way ahead. U.S. companies accounted for 44% of R&D spending by life sciences companies around the world in 2010, according to estimates by Battelle/R&D Magazine. And U.S. government support for health research is unsurpassed, accounting for 70% of global public sector funding.

On the other hand, U.S. support for energy research is mediocre, at best. U.S. companies account for only 25% of global energy R&D spending by businesses. And in 2008, before Obama took over, U.S. government funding for energy R&D accounted for only 20% of global public sector spending on energy R&D. That’s pitiful.

Here’s what a recent R&D Magazine piece says about U.S. energy R&D:

the level of R&D spending in the U.S. energy sector is small in absolute terms and as a percent of revenue (0.3%) when compared with other sectors. For example, the total amount of private sector investment in all forms of energy research in our portfolio would likely amount to little more than half of the leading life science R&D investor, Merck, or the leading software/IT R&D investor, Microsoft, both of which invested more than $8.4 billion in R&D in 2009.

Mr. President, every time you talk about clean energy creating jobs, you are placing your bet on the wrong horse.  Communications and biosciences are the best bets we have in the near-term.

Now we come to regulation. I’m afraid once again the President started out right and ended upside-down. He began by explaining how he would get rid of rules that imposed an unnecessary burden (29 words). But then he spent more than triple the time (102 words) defending his administration’s regulatory efforts. He should have stopped while he was ahead.

Finally, we come to jobs, which were spread through the whole speech. This is my ‘soft’ count of how many times the word ‘jobs’ was mentioned in connection with various areas of the economy (your count may differ):

  • IT – 1
  • Space – 1
  • Clean energy – 2
  • Education – 3
  • Infrastructure – 2
  • Exports – 4
Exports got the most mentions as a source of jobs, but there was no mention of imports, and no mention of the fact that our trade deficit in advanced technology products hit an all-time record in November, going into double digits for the first time. The reason? Imports of advanced technology products have surged, while exports are basically flat. Before worrying about exports, we should worry about recapturing some of the jobs lost to imports.

This piece is cross-posted at Mandel on Innovation and Growth

Too Soon to Tell About FCC Rules

I had hoped to write a simple post giving thumbs up or down to yesterday’s FCC ‘net neutrality’ rule-making. Alas, I can’t, yet.

Let me explain. I judge their actions by applying the principle of countercyclical regulatory policy: In recessions, the government should refrain from imposing heavy-handed regulations on innovative, growing sectors. The goal is to keep the communications innovation ecosystem growing and healthy.

From that perspective, the three basic rules that the FCC approved are fine: Transparency, no blocking of legitimate websites, and no “unreasonable discrimination” by wired broadband.

The key here is the transparency provision, which gets little attention. If we look back at the wreckage of the financial boom and bust of the 2000s, the big problem was not financial innovation. Rather, the big mistake made by the financial regulators was not pushing for more information about the decisions being made by Wall Street. That would have enabled regulators to put up a stop sign before things got out of hand.

Learning from that bad example, an intelligently-enforced transparency provision for broadband providers—requiring them to release “accurate information regarding the network management practices, performance, and commercial terms of its broadband Internet access services”—would go an awfully long way to deterring abusive practices without interfering with innovation.

If the FCC had just stopped with its three rules, we could be heading for the best of all possible worlds, where the communications innovation ecosystem keeps growing, the providers earn enough profits to allow them to keep investing, but where transparency helps encourage them to be good stewards and not to be too greedy.

But not content to leave well enough alone, the FCC appears to have added a lot of extra verbiage to the order that muddies the waters, to the point where I can’t even figure out what they are trying to achieve. I say ‘appears’ because all we have so far are excerpts from the text, rather than the full text itself.

If regulators can’t make rules that are clear and straightforward, it’s a sign they shouldn’t be doing it. I wait eagerly for the actual text of the order.

This piece is cross-posted at Mandel on Innovation and Growth

Genachowski Walks the Tricky Path

FCC Chairman Julius Genachowski should be given a measured round of applause for his proposed “rules of the road” for Internet openness. Genachowski addressed the core issues, including a basic no-blocking rule and a right for telecom providers to engage in “reasonable network management.” And he did so without putting an excessively heavy new regulatory burden on the communications sector.

The truth is, the FCC is walking a tricky path. The broad communications sector that the agency oversees, long-maligned, has turned into a crown jewel in today’s domestic economy—vibrant and dynamic. Yet the FCC has come under pressure to impose strict net neutrality rules—a nutty move that would have been the equivalent of doing invasive surgery on a healthy patient.

Instead, Genachowski and the FCC are following the basic principle of countercyclical regulatory policy — the government should stay away from imposing onerous new regulations on growing and innovative sectors such as communications while the economy is still sluggish.

Between now and the December 21st meeting of the Commission, Genachowski needs to make sure that his rules of the road stay as ‘minimally invasive’ as possible.  Attempts to broaden them, no matter how well-meaning, will have the effect of putting the communications sector on notice that any commercial negotiation, technical decision, or investment strategy could be second-guessed by regulators—not the best way to have rapid innovation or job creation.

Decoupling Taxes on Capital

The president will meet with leaders from both parties on Thursday to discuss Congress’s unfinished business for the lame-duck session, and the only thing that is clear going into that meeting is that item number one on the agenda (rightly or wrongly) will be the Bush tax cuts. Speculation is running high this week that the White House is considering a compromise approach that would extend all of the Bush tax cuts temporarily, most likely for two years. This comes in place of the previous round of speculation that the president’s strategy was focused on “decoupling” the tax breaks, meaning he would push for Congress to vote separately to permanently extend lower tax rates for all households making less than $250,000 per year, while allowing another vote on a temporary extension of the cuts for the two percent of taxpayers earning more than that.

As both sides prepare to dig in their heels for the coming tax fight, the possibility of policy alternatives has given way to a pure tug-of-war exercise, in which compromise is limited to questions of how long to extend the cuts or whether to draw the line at $1 million rather than $250,000. The rare fresh approach is too quickly ignored, such as Senator Mark Warner’s op-ed last week calling for the high-income tax cuts to be redirected as targeted tax incentives for businesses to boost investment and jobs.

Warner’s proposal would likely be a far more effective way to put lost tax revenues into the most productive hands for lifting our economy, but it’s probably not on the table.

Both parties appear hell-bent on confining this battle to the provisions of the original Bush tax cuts, with the winner to be determined by which provisions do or do not get extended.  It’s an unfortunate corner we have painted ourselves into, but there are still important policy issues within this narrow debate that deserve greater attention and vigilance.

In a new memo released today, PPI Senior Fellow Michael Mandel acknowledges that the current tax debate has totally missed the most important big-picture questions about the need to modernize our outdated tax code for what he calls the “supply-chain world” of the 21st-century global economy. However, Mandel points out specific elements of the Bush tax cuts that could actually help move us closer to the type of tax code we need for today’s economy: namely, the lower rates on dividend income and capital gains.

Mandel explains that keeping rates low on income from capital is critical for encouraging long-term investment in innovative industries, and that raising these rates right now would be a particularly bad idea, because our economy is still languishing in what he calls a “business investment drought.” Compared to the data on consumer demand, government spending, and even the collapse in housing, Mandel concludes that the real hole in the economy is nonresidential investment, which has plummeted even more sharply than housing. So while the tax debate has so far focused on the economic impact that marginal tax rates would have on consumer spending, Mandel makes the case that we should be looking at the impact that upcoming tax votes will have on investment:

It doesn’t make sense to raise the tax rate on corporate dividends and capital gains in the middle of a U.S. investment drought. That’s true, whether you believe in Keynesian economics, supply-side economics or anything in between.

Taxing capital at too high a rate impairs the environment for innovation, especially in this world of permeable borders and mobile money. In particular, raising the tax rates on dividends is likely to hurt innovative industries such as telecommunications and pharmaceuticals, which tend to pay out dividends at a higher level than other industries.

I have raised similar issues about this potential problem of dividend rates before (mainly here, but also here), but Mandel’s analysis of investment brings the question into much sharper relief. Unfortunately, the positions of the White House and Congress have been much less clear on this issue. This year’s tax debate has been an exercise in gamesmanship more than a battle of ideas, so both the president and Democratic leaders have remained a little ambiguous about their proposals for these rates, largely because they don’t fit well with the line-drawing fight over whether the wealthiest Americans should have any of their tax cuts extended.

President Obama has said he supports keeping rates on dividends capped at 20 percent, in line with what the rate will be for capital gains income (both are currently taxed at 15 percent, but the dividend rate is scheduled to more than double in 2011 to 39.6 percent for taxpayers receiving the bulk of these payments). Secretary Geithner has said the same. Both men stopped short of saying outright that the 20 percent rate would apply to all taxpayers, even those making above $250,000, even though the president’s budget for 2011 spells it out explicitly. The 20 percent rate has also been endorsed by Senate Finance Committee Chairman Max Baucus, who called it “good policy” to keep the rates in line with capital gains rates:

Changing dividends to 20 percent as opposed to ordinary income rates and keeping it the same as capital gains, I think, is good policy. I’m going for policy. Twenty percent on dividends and capital gains is the right policy.

Senator Baucus and President Obama both deserve enormous credit for “decoupling” good policy from the political gamesmanship over the Bush tax cuts, and Baucus should continue to advocate for the lower dividend rate to be included in whatever compromise proposals get thrown around in the coming days and weeks.  As Mandel writes in today’s memo, “the best we can hope for may be small steps in the right direction” from this Congress toward a smarter tax code that encourages sustainable growth and innovation.  Hopefully Obama and Baucus can avoid taking a step backward on this one.

How Do You Define the Internet?

One of the more interesting comments filed with the FCC in its recent Further Inquiry into Two Under-Developed Issues in the Open Internet Proceeding came from a group of illustrious computer industry stalwarts such as Apple hardware designer Steve Wozniak, computer spreadsheet pioneer Bob Frankston, Stupid Network advocate David Isenberg, and former protocol designer David Reed.

Their comments are worth noting not only because they come from such a diverse and accomplished group of people, but also because they’re extremely hard to follow (one of the signers told me he almost didn’t sign on because the statement was so unclear). After reading the comments several times, asking the authors for clarification, comparing them to previous comments by a similar (but larger) group known as “It’s the Internet, Stupid,” and to an even older statement by a similar but larger group called the Dynamic Platform Standards Project (DPSP), I’m comfortable that I understand what they’re trying to say well enough to explain.

A Passion for Definition

The author of these three statements is Seth P. Johnson, a fellow from New York who describes himself as an “information quality expert” (I think that means he’s a database administrator, but it’s not clear). Johnson jumped into the net neutrality fray in 2008 by writing a proposed law under the name of the DPSP and offering it to Congress.

The gist of the thing was to define Internet service in a particular way, and then to propose prosecution for any ISP that managed its network or its Internet connections in a way that deviated from the definition.  Essentially, Johnson sought authority from the IETF’s Internet Standards, but attempted to reduce the scope of the Internet Standards for purposes of his Act. The proposed Act required that ISPs make their routers “transmit packets to various other routers on a best efforts basis,” for example, which precludes the use of Internet Type of Service, Class of Service, and Quality of Service protocols.

IETF standards include a Type of Service (ToS) option for Internet Protocol (IP) as well as the protocols IntServ, DiffServ, and MPLS that provide mechanisms for network Quality of Service (QoS). QoS is a technique that matches a network’s packet transport capabilities to the expressed needs of particular applications, ensuring that a diverse group of applications works as well as possible on a network of a given, finite capacity. ToS is a similar method that communicates application requirements to one of the networks that carries IP datagrams, such as Ethernet or Wi-Fi. Packet-switched networks, from the ARPANET days to the present, have always included QoS and ToS mechanisms, which have been used in some instances and not in others. You’re more likely to see QoS employed on a wireless network than on a wireline network, and you’re also more likely to see QoS on a local network or at a network edge than in the Internet’s optical core; but the Internet’s optical core is an MPLS network that carries a variety of private network traffic at specified service levels, so there’s quite a bit of QoS engineering there too.
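To make the mechanism concrete, here is a minimal sketch, not drawn from any of the filings under discussion, of how an application on a Linux-style host can mark its own traffic with a DiffServ Code Point. The address, port, and the choice of the Expedited Forwarding code point are illustrative assumptions:

    import socket

    # Expedited Forwarding (DSCP 46) is commonly used for voice traffic; the DSCP
    # occupies the upper six bits of the old IPv4 ToS byte, hence the shift.
    EF_DSCP = 46
    TOS_VALUE = EF_DSCP << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

    # Every datagram sent on this socket now carries the EF marking. Routers that
    # honor DiffServ may treat it preferentially; best-efforts routers ignore it.
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))

Nothing about the marking forces any network to act on it; it simply lets the application express a requirement that QoS-aware networks are free to honor or ignore.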

The purpose of defining the Internet as a QoS-free, “Best-Efforts” network was to prevent network operators from making deals with content providers that would significantly privilege some sources of content over others. This approach originated right after Bill Smith, the former CTO of BellSouth, speculated that ISPs might increase revenues by offering exceptional performance to select application providers for a fee. While the service that Smith proposed has a long history in Internet standards (RFC 2475, approved in 1998, discusses “service differentiation to accommodate heterogeneous application requirements”), it’s not part of the conventional understanding of the way the Internet works.

Defining One Obscurity in Terms of Another

“Best-efforts” (BE) is a term of art in engineering, so defining the Internet in this way simply shifts the discussion from one obscurity to another. BE has at least three different meanings to engineers, and another one to policy experts. In the broadest sense, a BE network is defined not by what it does as much as by what it doesn’t do: a BE network makes no guarantee that any given unit of information (“packet” or “frame”) transmitted across the network will arrive successfully. IP doesn’t provide a delivery guarantee, so the TCP code running in network endpoints such as the computer on your desk or the mobile phone in your hand has to take care of checking for lost packets and retransmitting when necessary. BE networks are appealing because they’re cheap to build, easy to maintain, and very flexible. Not all applications need every packet to arrive successfully; a Skype packet that doesn’t arrive within 200 milliseconds can be dropped, for example. BE networks permit that sort of decision to be made by the application. So one meaning of BE is “a network controlled by its endpoints.”
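Here is an illustrative sketch of that endpoint-controlled behavior; the 200-millisecond budget, the timestamp format, and the play_audio helper are my own assumptions, not anything specified in the Internet standards:

    import socket
    import struct
    import time

    LATENESS_BUDGET = 0.200  # seconds; older packets are useless for real-time playback

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5004))

    while True:
        packet, _addr = sock.recvfrom(2048)
        # Assume the sender prepends an 8-byte timestamp (seconds since the epoch)
        # and that the two clocks are roughly synchronized.
        sent_at = struct.unpack("!d", packet[:8])[0]
        payload = packet[8:]
        if time.time() - sent_at > LATENESS_BUDGET:
            continue  # the application, not the network, decides to drop it
        play_audio(payload)  # hypothetical playback function

The network made no promises here; the endpoint decided what a lost or late packet means for this particular application.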

Another meaning of BE comes from the QoS literature, where it is typically one of many service options in a QoS system. In the Internet’s DiffServ standard and most other QoS systems, BE is the default or standard treatment of all packets, the one the network router employs unless told otherwise.

Yet another definition comes from the IEEE 802 standards, in which BE is the sixth of seven levels of service for Ethernet, better than Background and worse than all others; or the third of four levels for Wi-Fi, again better than Background. When policy people talk about BE, they tend to use it in the second of these senses, as “the standard treatment,” with the additional assumption that such treatment will be pretty darn good most of the time.
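A rough sketch of how those layers relate, loosely modeled on common DiffServ-to-Wi-Fi mapping practice rather than quoted from any standard, might look like this, with “Best Effort” as simply the default rung in the ladder:

    def wifi_category(dscp):
        """Pick a Wi-Fi access category for a packet based on its DiffServ Code Point."""
        if dscp in (8, 16):          # low-priority bulk traffic
            return "Background"
        if dscp in (34, 36, 38):     # assured-forwarding classes often used for video
            return "Video"
        if dscp == 46:               # Expedited Forwarding, often used for voice
            return "Voice"
        return "Best Effort"         # unmarked or unrecognized traffic gets the default

    print(wifi_category(0))   # Best Effort
    print(wifi_category(46))  # Voice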

Johnson’s FCC filing insists that the Internet, properly defined, must be a best-efforts-only system; all other QoS levels should be considered “managed services” rather than “Internet.” The filing touts a number of social benefits that can come about from a BE-only Internet, such as “openness, free expression, competition, innovation and private investment” but doesn’t explain the connection.

Constraining Applications

One of the implications of this view is that both network operators and application developers must adapt to generic treatment and refrain from relying on differentiated services or offering differentiated services for sale as part of an Internet service.

Unfortunately, the advocates of this viewpoint don’t tell us why they believe that the Internet must refrain from offering packet transport and delivery services that are either better or worse than generic best-efforts, or why such services would harm “openness, free expression, competition, innovation and private investment” if they were provided end-to-end across the Internet as a whole, or where the authority comes from to support this definition. We’re supposed to simply trust them that this is the right way to do things, relying on their group authority as people who have been associated with the Internet in various capacities for a long time. This isn’t engineering, it’s religion.

There is nothing in the Internet design specifications (Internet RFCs) to suggest that providers of Internet services must confine themselves to BE-only, and there is nothing in the architecture of the Internet to suggest that all packets must be treated the same. These issues have been covered time and again, and the FCC knows by now exactly where to look in the RFCs for the evidence that this view of the Internet is faulty. The Internet is not a packet delivery system; it’s a virtual network that only works because of the underlying physical networks that transport and deliver packets. This virtual network defines an interface between applications of various types and networks of various types, and as is the case in all abstract interfaces, it may provide least common factor services, highest common factor, or anything in between, all according to the needs of the people and organizations who pay for it, use it, and operate it. As Doc Searls said many years back, nobody owns the Internet, anyone can use it, and anyone can improve it. The capacity for constant improvement is the magic of the Internet.

Myth of the General Purpose Network

If we insist that the Internet must only provide applications with one service option, we doom application developers to innovate within narrow confines. A generic Internet is effectively optimized for file-transfer oriented applications such as web browsing, email, and media streaming; it’s fundamentally hostile to real-time applications such as immersive video conferencing, telepresence, and gaming. Some of the best minds in the Internet engineering community have labored for the past 20 years to devise systems that would allow real-time and file transfer applications to co-exist happily on a common infrastructure, and these efforts are perfectly consistent with the nature of the Internet properly understood.

The central myth underlying the view of Johnson and his co-signers is the “general purpose network” formulation. This terminology is part of telecom law, where it refers to networks that can support a variety of uses. When adapted to engineering, it becomes part of an argument to the effect that best efforts is the “most general purpose” method of supporting diverse applications and therefore the “best way to run a network.” I think it’s wrong to frame the challenges and opportunities of network and internetwork engineering in this way. I’d rather that people think of the Internet as a “multi-purpose network” that can offer diverse packet transport services suitable for diverse applications. We want network operators to build networks that serve all applications appropriately at a price that ordinary people can afford to pay. We don’t want consumers to pay higher prices for inefficient networks, and we don’t want to confine application innovation to the narrow bounds of legacy systems.

Segregated Systems are Harmful

Systems that allow applications to express their requirements to the network, and the network to provide applications with differentiated treatment and feedback about current conditions, are apparently the best way to do this; that’s the general concept of Internet QoS. This has been the thinking of network and internetwork engineers since the 1970s, and the capability to build such systems is embedded in the Internet architecture. The technical people at the FCC who are reading the comments in this inquiry know this.

These arguments seem to endorse a disturbing trend that the so-called “public interest” advocates are now advancing, to the effect that advanced network services must be segregated from generic Internet service on separate (but equal?) physical or logical facilities. This is not good, because it robs us of the benefits of converged networks.  Rather than dividing a coax or fiber into two frequencies and using one for IPTV and the other for Generic Internetting, it’s better to build a fat pipe that provides IPTV and Generic Internetting access to the same pool of bandwidth. The notion of sharing a common pool of bandwidth among multiple users and applications was the thing that started us down the road of packet switching in the first place, and it’s very important to continue developing that notion; packet switching is the Internet’s enabler. Segregated facilities are undesirable.

Integrating Applications and Networks

What we need in the Internet space is a different kind of vertical integration than the kind that was traditional in the single-application networks of the past. QoS, along with modular network and internetwork design, permits applications and end users to assemble, as applications run, networks that provide the level of service they need at a price they can afford. We get there by allowing applications to explicitly state their requirements to the internetwork, and the internetwork to respond with its capabilities. Application choice meets the needs of innovators better than a rigid “one size fits all” formulation.

The Internet is, by design, a platform for both generic and differentiated services. That’s its true legacy and its promise. We don’t need to run into historical blind alleys of myth and prejudice when the opportunity faces us to build this platform out to the next level. As more Internet use shifts to mobile networks, it will become more critical than ever to offer reasonable specialization to applications in a standards-compliant manner. The Internet of the Future will be multipurpose, not generic.

Photo credit: Pixelsior

Internet Wars: A Who’s Who Guide

Back in the day, there were no protesters outside corporate headquarters in Silicon Valley, no one had a position on net neutrality because no one knew what it was, and technology journalists were breathlessly trying to keep pace with new technologies and companies instead of holding forth on civil rights and liberties or network engineering protocols.

But ten or 15 years in the life of the Internet is a long time. The Internet is the transformative phenomenon of our time, and its role in our lives raises serious questions about who the Internet “belongs” to, whether it is used for good or ill, what its technological limits are, and what role government has as arbiter of its future. The debates on these and other questions have become passionate and shrill, generating more heat than light at times. A person trying to follow the debate might need a field guide to sort through the wide array of groups and their philosophical or economic orientation. Allow me to offer up this breakdown, the details of which are spelled out in “Who’s Who in Internet Politics: A Taxonomy of Information Technology Policy,” a new report from the Information Technology and Innovation Foundation.

In the report, ITIF lays out the following eight categories:

Cyber-Libertarians – Think of them as the original “netizens” and purists who believe the Internet should be governed solely by its users and that “information wants to be free.” Privacy and piracy, in their view, will be taken care of by the individuals who make up the organic and living Internet, not by government. Groups include the Free Software Foundation and the Electronic Frontier Foundation.

Social Engineers – Mostly liberal, they see a lot of good in the Internet as an education and communications tool, but they worry about the “digital divide,” privacy, net neutrality, and a concentration of power by both government and major corporations. These issues could erode the Internet’s capacity to be a tool for good for all. Groups include the Benton Foundation, Center for Democracy and Technology, Center for Digital Democracy, Civil Rights Forum on Communication Policy, Consumer Project on Technology, Electronic Privacy Information Center, Free Press, Media Access Project, and Public Knowledge, along with scholars such as Columbia’s Tim Wu, MIT Media Laboratory’s David Reed, and academics at Harvard’s Berkman Center (among them Larry Lessig and Yochai Benkler).

Free Marketers – Unleash the entrepreneurs! This group views the digital revolution as the great third wave of economic innovation in human history and a dynamic and liberating force that the government should mostly keep out of. Groups include the Cato Institute, the Mercatus Center, the Pacific Research Institute, the Phoenix Center, the Progress & Freedom Foundation, and the Technology Policy Institute.

Moderates – Unabashedly pro-IT, they see the Internet as this era’s driving force for both economic growth and social progress, and they believe a light touch from government is useful in helping the Internet reach its potential. “Do no harm” to IT innovation but also “actively do good” is their mantra. Examples of moderates include the Center for Advanced Studies in Science and Technology Policy, the Center for Strategic and International Studies, ITIF, and the Stilwell Center.

Moral Conservatives – These groups see the Internet as an often smutty and dangerous place teeming with pornographers, gamblers, child molesters, and terrorists that only government can keep at bay. They pushed for passage of the Communications Decency Act and the Child Online Protection Act and for Internet filtering in libraries, and worked to push legislation to ban online gambling. Examples include groups like the Christian Coalition and Focus on the Family and, around the world, religiously conservative countries such as Indonesia, Thailand, and Saudi Arabia that seek to limit activity on the Internet.

Old Economy Regulators – This group believes the Internet should be regulated in the same way that government regulates everything else. Otherwise, you have chaos and inequities.  Examples of this group include law enforcement officials seeking to limit use of encryption and other innovative technologies, veterans of the telecom regulatory wars that preceded the breakup of Ma Bell, legal analysts working for social engineering think tanks, as well as government officials seeking to impose restrictive regulatory frameworks on broadband.

Tech Companies & Trade Associations – Software and communications giants, Internet start-ups, and the groups that represent them, these tech interests tend to believe that regulation can be both advantageous and detrimental, depending on their particular business model. They also advocate policies that are good for the technology industry or the economy in general. Examples include IBM, AT&T, Hewlett-Packard, Cisco Systems, and Microsoft; recent market phenomena such as Google and Facebook; and trade associations like the Information Technology Industry Council and the Association for Competitive Technology. They delve into trade, tax, regulatory, and other public policy issues from a bottom-line perspective rather than a philosophical basis.

Bricks-and-Mortars – This group includes the companies, professional groups, and unions that use the Internet but see it eroding old-economy, face-to-face business transactions, and they struggle to hold back the tide. These include producers, distributors, and middlemen (such as retailers, car dealers, wine wholesalers, pharmacies, optometrists, real estate agents, or unions representing workers in these industries). The long-running battle over taxing Internet sales illustrates their struggle.

Of course, individual groups defy rigid characterization. For example, Moral Conservatives might find themselves on the same side of an issue as Social Engineers. Also, consensus is often elusive in trade associations, as member companies often have complicated interrelationships or niches in the market. However, knowing whether a group leans more toward advancing the interests of the individual or of society as a whole, sees government regulation as generally useful or harmful, and is wary of the Internet’s influence or enthusiastic about it is useful for understanding where it stands. You might need Venn diagrams to fully understand the Internet policy landscape when surveying issues such as piracy, net neutrality, intellectual property rights, and Internet sales taxes. (An unusual pursuit, to be sure.)

One common theme in all these groups is that they almost certainly believe they are advocating sound policies and doing the right thing for individuals and for society – as incomprehensible as that might seem to those from an opposing organization. In some cases, their passion for their beliefs makes for a good sound bite in a news story. A government scheming to implant chips in our heads is an easier story to sell than an explanation of how packets are sorted on broadband networks. And this is dangerous.

The Internet and technology debate is being politicized and degraded. And misguided and ill-informed debates lead to misguided and ill-informed policies. We have enough people vehemently opposing bills they haven’t read, crafting policy from bumper stickers, and making caricatures of opponents. The Internet’s transformation is really just beginning, so people in government, the media, and the public at large need to refine and update their understanding of the philosophical issues, the players, the economic realities, and the societal issues at stake. Wherever you come down on a range of tech policies – whether you carry placards outside of Facebook’s offices or decide to get an engineering degree to figure out net neutrality – it is essential to understand the political and policy landscape that didn’t exist just 20 years ago. And now you have a map.

Photo credit: Stefan

The Case for “Smart Regulation”

Michael Mandel has an op-ed explaining his plan for “smart regulation” up over at CNN.com today.

Mandel starts by noting that the one sector of the economy where there has been real growth of late is the digital communications sector. And given how hard new jobs are to come by in this current economy, Mandel figures we ought to keep growth going where we can by limiting the temptation to write too many new rules in the telecom sector:

What’s needed from regulators now is some creativity and humility – in the form of “countercyclical regulatory policy.” This gives innovators a bit of breathing space at the start of an economic recovery, but sets the stage to tighten regulations later on if excesses develop.

For example, Mandel argues that now is not the time for any new net neutrality rules:

For that reason, I suggest a two-year pause in new broadband regulation, keeping the current balance among the different players, which seems to be generating growth. At the same time, the knowledge that the regulator remains on duty, ready to intervene, would provide an essential check.

However, Mandel is clear that counter-cyclical regulation is not the equivalent of no regulation:

This approach does not mean regulators can go to sleep nor does it mean they can raise the flag of laissez-faire. What’s needed is the nuanced judgment of sentries posted at a tense border spot. With watchful eyes, regulators must practice thoughtful restraint that allows space for job leaders to innovate and hire, while remaining ready to aggressively confront violations of law or abuses of consumer rights if they take place.

It’s a compelling argument, and if you still want to learn more after reading the op-ed, you’ll definitely want to read Mandel’s recently released PPI Policy Memo, “The Coming Communications Boom? Jobs, Innovation and Countercyclical Regulatory Policy.”

Is the Google-Verizon Proposal a Killer App in the Broadband Debate?

Google and Verizon have finally released the details of the policy proposal they have been negotiating for nearly a year now, and the news has generated enormous chatter around Washington and across the blogosphere, with bloggers panning it and watchdog groups warning of the end of the internet as we know it.

Obviously, advocacy groups on both sides are focused on the substance of the agreement. But I am more interested in what this means for the policy process, and how effective it will be in nudging Congress and the FCC to clarify the rules of the game for broadband internet service. What these two companies have provided is helpful: a concrete policy proposal that Congress and the FCC can consider, and that imposes a framework for targeted comments from the industry and watchdog groups.

In fact, given the weight of these two companies and the collapse last week of the FCC’s attempts at talks, the roll-out for this proposal may make it a “killer app” in the broadband debate (and not simply an internet killer, as some are calling it). Now that Google and Verizon have put a policy proposal on paper, it becomes the baseline that everyone else has to support or oppose to some degree, including FCC commissioners and members of Congress. Pressuring leaders to make decisions is an appropriate goal, and that’s what this proposal does.

As for the proposal itself, it should be judged as a work in progress. Many of the principles themselves are worthy goals: giving consumers freedom to choose content, applications, and devices; requiring more product transparency from service providers; and prohibiting paid fast lanes for internet traffic. The recommendation that the FCC have real teeth to enforce violations of the proposed rules on a case-by-case basis is a good one.

If the kind of self-regulation proposed for the broadband internet industry is going to be successful, there also needs to be enough competition in the market to empower consumers to punish service providers for violating the principles that Google and Verizon have laid out. That means that in addition to policing the market for bad apples, the FCC needs to be vigilant in monitoring the health and competitiveness of the market for broadband internet access. If there are enough companies offering similar services, and the FCC and watchdog groups hold companies publicly accountable for their behavior by informing consumers of violations, consumers can play a valuable role in policing the market by switching providers when they feel their content or services are being unfairly restricted.

Both CEOs acknowledge that “no two companies should be so presumptuous as to think they can solve this challenge alone,” and no one should see this as an end to the debate. Verizon and Google have given everyone involved a chance to speed up the process by narrowing the conversation to actual yes-or-no decision making. I commend these companies for at least trying to move the ball forward with a good-faith proposal.

Photo Credit:  Peter Huys’s Photostream

Behind the Big Paywall

Anyone who has been active in politics since the prediluvian era of the 1990s can probably remember a time when a central event of every weekday was the arrival on the fax machine of The Hotline, once the Daily Bread of the chattering classes.

You can revisit those days–or, if you are younger, discover them–via a long article at Politico by Keach Hagey that examines The Hotline’s past, present and future in some detail. It certainly does bring back memories:

Howard Mortman, a former columnist and editor at The Hotline, remembers the first time he saw the process — a blinking frenzy of subscribers dialing in by modem, one by one, to get their pre-lunch politics fix.

“We would publish at 11:30, and you could go downstairs and see the lights flicker as people downloaded The Hotline from the telephone bulletin board,” he said. “At that time, in 1995, that was cutting-edge technology.”

Today, The Hotline is still putting out its exhaustive aggregation of cleverly titled political tidbits at 11:30 a.m., though subscribers hit a refresh button instead of a fax number to get it. But the sense of cutting-edge technology and unique content is gone, eclipsed by an exponentially expanding universe of political websites, blogs, Twitter feeds, Google alerts and mobile apps that offer much of what a $15,000 annual office membership to The Hotline offers — but faster and for free.

In effect, Hotline was the first “aggregator,” and as a result was an exceptionally efficient and even cost-effective way to obtain political news at a time when clipping services were the main alternative. And for all of Hotline’s gossipy Washington insider attitudes, it did cover campaigns exhaustively, from coast to coast, in a way that was virtually unique at the time.

If you are interested in the process whereby The Hotline has struggled to survive in the online era, or in the cast of media celebrities who got their start there, check out the entire article, with the appropriate grain of salt in recognition of the fact that Politico views itself as a successor institution.

The takeaway for me, though, is the reminder that for all the maddening things about blogs and online political coverage generally, it’s really remarkable how much is now available to anyone, for free, 24-7–material that is shared by the DC commentariat and, well, anybody who cares to use it. In The Hotline’s heyday, its subscribers (concentrated in Washington but scattered around the country) really did represent a separate class with specialized access to information that created and sustained a distinct culture.

If you have money to burn, there are still paywalls you can climb to secure a privileged perch from which to observe American politics, just as you can obviously learn things living and working in Washington or frequenting its real or virtual watering holes that wouldn’t be obvious to others. But we have come a long way. And it’s actually wonderful that the entire hep political world no longer comes to a stop shortly before noon, in some sort of secular Hour of Prayer, in anticipation of The Word rolling off the fax machine.

This item is cross-posted at The Democratic Strategist.

Photo Credit: Grass Compass Church’s Photostream

Wikileaks: Lack of Editorial Discretion

Does the existence of a whistle-blower website like Wikileaks do more harm or good? Decisions about exposing information to the public depend on nuance and context, and it’s clear that in the wake of this case, Julian Assange, the site’s editor-in-chief and public face, has little appreciation for either.

Wikileaks is, in effect, a conduit for purported whistle-blowers, and describes itself as a “buttress against unaccountable and abusive power” and prides itself on “principled leaking.”

As a vehicle for whistle-blowing, the site has a responsibility to assert editorial discretion about the content it supplies, carefully weighing costs and benefits to the whistle-blowing party, those the information directly impacts, and third parties. If Wikileaks is an open repository for secret information without discretion and vetting, that’s a problem.

Prior to releasing the current military documents, the site should have exercised discretion with the following criteria in mind:

— Does the totality of the information indicate unequivocal, fact-based wrongdoing?
— Is this information new? Does it add to the public debate?
— Does its release endanger or save lives?
— Does its release cost or save public money?

By its own standard, Wikileaks, at best, punted. More likely, it outright failed and discredited itself.

Assange could not make a reliable judgment about the totality of the information he released because he could not have possibly known what exactly he was releasing. With a Wikileaks staff of reportedly about five full-timers and a budget of $300,000, it’s difficult to imagine how the site could have sifted through so many documents and assembled a reasonable cost-benefit analysis, even with an “army” of hundreds of part-time volunteers. Rather, he essentially outsourced vetting to The New York Times, Guardian, Der Spiegel, and other websites that have cattle-called hungry readers to sift through the material. Ergo, Wikileaks likely had no idea if it was releasing ironclad evidence of wrongdoing.

Second, as I detailed yesterday, the information was clearly not “new.” It only served to amplify public debate. Further, the information’s release likely endangered American lives, and certainly jeopardized American sources and methods and, consequently, their safety.

Finally, it’s unclear whether the release saves public money, unless you argue that ending the war would do so. But that argument, much like the answers to all of the above, suggests that Assange and Wikileaks are motivated much more by activism than journalism. And that discredits any strain of legitimate public service the site hopes to render in the future.

From now on, Wikileaks would do well to know exactly what it’s releasing, know that it’s a new fact, and weigh the balance of lives, security and money.

Photo Credit: Joe-manna’s Photostream

Lieberman’s Cyber Bill Causes Consternation Among Dems

Late last week, Sen. Joe Lieberman (I-CT) unveiled a draft bill that seems to be causing some anxiety among progressives.

Certain provisions in the bill seem to be reasonable – like creating a National Center for Cybersecurity and Communications and an Office for Cyber Policy – and should strengthen American defenses in an increasingly vulnerable climate (particularly as the China cyber threat is on the upswing). But others have split Democrats.

There seem to be three camps: civil libertarians, Democrats on the Hill working on the cyber issue, and the White House.

Civil libertarians are concerned about this provision of the bill, which would provide the president with the power to declare a national cyber emergency and essentially compel owners of critical cyber infrastructure to subjugate themselves to the president’s direction. In other words, civil libertarians are making the case that with an emergency declaration, the president could close the Internet.

Lieberman has tried to explain the provision, saying “the government should never take over the Internet.” But his explanation fails to bridge the gap between a complete “taking over” and the ill-defined and vague emergency provision that his bill provides for.

But cyber-congressmen (a term I’m laying claim to) have come out in support of Lieberman’s bill:

In an unusual show of bipartisanship, two prominent senior members of the House panel — California Democrat Jane Harman and New York Republican Peter King — announced plans to co-sponsor and introduce a companion bill in the House to S. 3480, introduced last week by Senators Joe Lieberman (ID-Conn.), Susan Collins (R.-Maine) and Tom Carper (D.-Del.).

“I agree with Mr. King that the Lieberman-Collins bill is excellent,” declared Harman, adding, “I do plan to co-sponsor the bill with him…I think it is an excellent effort. I’m sure it will change as it goes through the legislative process, but I do think it will be good to work with our counterparts in the Senate on this, as we worked with our counterparts in the Senate on the Safe Ports act.”

While supporting tough cyber legislation is certainly laudable, questions of motivation hang in the air. Is support for the bill born of a desire to seek genuine bipartisan compromise, an attempt to pass major legislation that members are responsible for in an election year, or because of the reported overtures to cyber-business? Or all three? Or something different?

Then there’s the White House. Deputy Under Secretary for the National Protection and Programs Directorate for the Department of Homeland Security (there’s a mouthful) Philip Reitinger testified that:

[T]he administration’s review of the bill, which was released last week, is incomplete and could not give a timeline on when this would be done. He mentioned that revisions of the bill should be aware that the president already has certain emergency powers and care should be taken to avoid overlapping the law.

As such, the bill was declined the Obama administration’s endorsement in the hearing. Instead, the deputy suggested that the current Section 706 of Communications Act should be used as a foundation for revisions in the law, as opposed to the creation of a new one.

That last point is the upshot: no matter what the support or concerns are, the bill won’t become law unless the administration fully supports it. At this point, that’s unlikely without significant revision.

Photo Credits: Tsakshaug’s Photostream

After Comcast, What’s Next for Net Neutrality?

Congress is gearing up to reopen the Communications Act of 1934 in order to come up with what it hopes will be a better way to make sure as much information flows through the Internet as possible and in a manner fair to consumers, service providers and other stakeholders. During a panel discussion co-sponsored by the Free State Foundation and the Information Technology and Innovation Foundation, it was clear that the coming debate on the future of America’s Internet policy in general and its net neutrality policy in particular will continue to be a lively one.

Congress has effectively advised the Federal Communications Commission (FCC) not to reclassify Internet edge networks – cable, DSL, FTTx, and wireless – under Title II of the Communications Act. A majority of House members signed letters last week to that effect, and while these letters don’t have the force of law, they’re certainly significant statements of congressional sentiment. The FCC is, after all, a creature of Congress that isn’t entitled to operate outside the scope of its statutory authority, regardless of how noble its motives may be or how urgent the problems it seeks to address are.

The paramount questions for the immediate future concern the shape of Internet policy, and most of the answers must come from Congress. Jim Cicconi of AT&T and moderator Rob Atkinson of ITIF pointed out that the net neutrality debate has sucked the oxygen out of the room on Internet policy for the past five years. Instead of developing plans for national purposes of the Internet and ensuring that it reaches all Americans at reasonable speeds and prices, the policy community has struggled with questions about packet discrimination and “reasonable network management.” While we’ve been obsessing over how to differentiate good network operator behavior from bad, other nations have leapt ahead of us in broadband speed, adoption, or both. Even after the unveiling of a National Broadband Plan, the public debate continues to focus too much on hypothetical anti-consumer behavior by network operators and service providers.

Five years ago, panelist Randy May of the Free State Foundation developed a model law for the Internet, the “Digital Age Communications Act” (DACA), intended to update the 1934 Communications Act that governs the FCC. Under the DACA framework, regulators can take action only against incidents in which a broadband provider enforces policies harmful to consumers in non-competitive markets. The virtue of DACA is its simplicity – it forswears technical prejudgment of particular management practices – but it has attracted criticism from those who find it too strict as well as from those who find it too permissive; it’s not clear why a market power test is relevant once a given practice has been found to harm consumers, for example. Questions of this sort must ultimately be addressed by Congress, as they pertain to the policy space and aren’t simply matters of regulation.

Professor James Speta of Northwestern warned that the “Title II with forbearance” approach to Internet regulation proposed by FCC chairman Julius Genachowski is inherently unstable. (Under this idea, Title II would apply to the Internet, except for the parts of Title II that don’t.) Obviously, the reclassification itself raises troubling legal issues and is certain to provoke litigation; with the outcome uncertain, it would likely take years to settle the rule’s status. The forbearance process is a second source of instability, because regulations can be imposed and withdrawn so easily as matters of forbearance. While the FCC’s proposed “Third Way,” built on reclassification and forbearance, appears to offer a short cut to an Internet regulation framework, its expeditious character is probably more an illusion than a reality.

A number of panelists addressed the question of what to do while we’re waiting for Congress to draft an Internet policy. Eric Klinker, CEO of BitTorrent, Inc., pointed out that industry deals with questions of Internet management through self-regulatory and other cooperative efforts. BitTorrent, Inc. was not a party to the complaint against Comcast dealt with by the previous FCC – its competitor Vuze, Inc. filed the petition. BitTorrent took a very different approach, meeting with the Comcast network operations team to determine the nature of the problem that motivated them to actively manage parts of the network as they did and to map out a better solution. Rather than seeking regulatory relief, BitTorrent developed a better protocol, uTP, which yields to interactive applications but saturates network links when no other applications are active. BitTorrent improved the Internet in a way that no regulatory action can.
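
To make the idea concrete, here is a minimal, purely illustrative sketch of the delay-based congestion control concept behind uTP (the approach known as LEDBAT). The constants and function names below are hypothetical rather than BitTorrent’s actual code: the sender watches how much queuing delay its own traffic adds to the path, backs off when that delay starts to crowd out interactive applications, and ramps up when the link is otherwise idle.

```python
# Illustrative sketch of delay-based (LEDBAT-style) congestion control,
# the idea behind uTP. Constants and names are hypothetical.

TARGET_DELAY_MS = 100.0   # queuing delay the sender is willing to add
GAIN = 1.0                # how strongly the window reacts to delay error
MIN_WINDOW = 2.0          # packets

def update_window(window: float, base_delay_ms: float, current_delay_ms: float) -> float:
    """Adjust the send window from one-way delay samples.

    queuing_delay estimates how much of the measured delay is our own
    traffic sitting in buffers. Above the target, we are crowding out
    interactive traffic, so the window shrinks; below it, the window
    grows and the transfer saturates an otherwise idle link.
    """
    queuing_delay = current_delay_ms - base_delay_ms
    off_target = (TARGET_DELAY_MS - queuing_delay) / TARGET_DELAY_MS
    window += GAIN * off_target
    return max(window, MIN_WINDOW)

# An idle path (little queuing delay) lets the window grow...
w = update_window(window=20.0, base_delay_ms=40.0, current_delay_ms=60.0)
# ...while a congested path makes the sender yield to other applications.
w = update_window(window=w, base_delay_ms=40.0, current_delay_ms=400.0)
print(w)
```

The design choice is the point Klinker was making: the protocol itself treats rising delay as a signal to get out of the way of interactive traffic, solving through engineering what the Vuze complaint sought to solve through regulation.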

The self-regulatory systems that have emerged from the broadband and Internet markets organically have been largely effective, but they may need to be supplemented with more active government involvement in the future. Whether this happens, and if so, how it happens, are likely to be the subject of debate in the near future — but that debate should take place in the Congress, not at the FCC.

Progress Report: Revisiting Rules of the Road for a New Economy

I know you’re not supposed to tout your own work on blogs, but for this, my inaugural post for Progressive Fix, I can’t resist.

When PPI established its New Economy Task Force 11 years ago, its first product was a pamphlet entitled “Rules of the Road: Governing Principles for the New Economy.” In Internet time, 11 years is a lifetime. But that short but powerful statement still holds up — and, I would argue, is just as relevant today as it was in 1999. This seems as good a time as any to revisit what we said and take stock of how far — or not — we’ve come.

The pamphlet started off with this statement:

The U.S. economy has undergone a profound structural transformation in the last decade and a half. The information technology revolution has expanded well beyond the cutting-edge high-tech sector. It has shaken the very foundations of the old industrial and occupational order, redefined the rules of entrepreneurship and competition, and created an increasingly global marketplace for a myriad of new goods and services.

I would venture to say that it’s even truer today than when we first wrote it. The introduction went on to state:

Yet while economic reality is fundamentally changing, much of our public policy framework remains rooted in the past. This mismatch between public policy and economic reality is not sustainable. … On one side of the political spectrum, policymakers advocate across-the-board tax cuts, a dramatically reduced role for government, and elimination of social regulations. … On the other side of the political spectrum, policymakers advocate increased spending on top-down social programs geared toward income redistribution, coupled with a focus on command-and-control regulation through bureaucratic institutions, ignoring just how entrepreneurial, fast moving, and flexible our economy has become. Furthermore, resistance from both ends of the political spectrum to open trade, global integration, and technological and organizational change threatens to slow the economic changes that hold great potential to yield higher standards of living for American workers.

After 11 years, while some progress has been made, all too often policy-makers still view economic and technology challenges through one of these two lenses. And those resistant to change, whether groups advocating strict regulations on “network neutrality” and “Internet privacy” or restrictions on globalization and trade, remain as active as ever, if not more so.

How Far Have We Traveled?

The guide offered 10 key rules to policy-makers to encourage an innovation-driven economy. How have we done on those prescriptions? Let’s go down the list:

Rule #1: Spur Innovation to Raise Living Standards

….Because innovation and change are disruptive, they tend to spark strong political demands to insulate affected segments of the economy and slow down economic change. Such demands, while understandable, inherently deny opportunities to less politically powerful interests in the guise of “protecting” those with clout. As a result, to effectively promote growth in the New Economy, government must facilitate, rather than resist, the processes of economic change and modernization as these changes create new opportunities and increased incomes for all Americans.

Unfortunately, the urge to protect the status quo is powerful, as Washington still shows little appetite for upsetting it by enabling or promoting innovation.

Rule #2: Expand the Winners’ Circle

Ensuring that the benefits of innovation and change are spread broadly will require that all Americans, including those not yet engaged in or benefitting from the New Economy, have access to the tools and resources they need to get ahead and stay ahead.

We’ve made some progress here, not the least of which was expanding health care coverage to more Americans (though the effects of reform won’t be felt for years). But more needs to be done, particularly in areas like unemployment insurance reform and better access to lifelong learning.

Rule #3: Invest in Knowledge and Skills

To spur innovation and equip citizens to win in the New Economy, government should invest more in the knowledge infrastructure of the 21st century: world class education, training and life-long learning, science, technology, technology standards, and other intangible public goods. These are the essential drivers of economic progress today.

Not many in Washington would disagree. But it’s a different matter altogether to muster the political will to increase investments in these areas, particularly when it means cutting old economy spending, such as agricultural subsidies.

Rule #4: Grow the Net

The Internet is a critical component of the emerging digital economy. …The information technology revolution is transforming virtually all industries and is central to increased economic efficiency and productivity, higher standards of living, and greater personal empowerment.

Governments must avoid policies and regulations that would inhibit the growth of the Internet or slow progress by protecting business interests threatened by the digitization of the economy. Policymakers should craft a legal and regulatory framework that supports the widespread growth of the Internet and high-speed “broadband” telecommunications, in such areas as taxation, encryption, privacy, digital signatures, telecommunications regulation, and industry regulation (in banking, insurance, and securities, for example).

In some ways Washington has embraced this message. The inclusion of billions of dollars in support for the smart grid, health IT and broadband in the stimulus package was a key step in the right direction. On the other hand, the growing interest in regulating the Internet — such as overly restrictive net neutrality and privacy regulations — suggests that we have gone in the wrong direction.

Rule #5: Let Markets Set Prices

In the old economy, government often regulated prices when national markets were dominated by oligopolies or monopolies. In those cases, the economic costs of government intervention were manageable, and sometimes necessary. But in the new, more competitive global economy, distorted prices are much more likely to lead to economically inefficient decisions by consumers and producers and to unfair, politically driven resource allocation. Therefore, in the absence of clear market failures, markets, not governments, should set prices of privately provided goods and services.

It’s still hard for many policy-makers to embrace this rule, but it’s as valid today as it was a decade ago.

Rule #6: Open Regulated Markets to Competition

Economists have long acknowledged that competition keeps prices down. The New Economy creates another critical reason for competition: competition drives innovation, and ultimately provides the greatest benefits to consumers and citizens. Of course, government must continue to provide common-sense health, safety, and environmental regulations. However, government should move away from regulating economic competition among firms and instead promote competition … Through minimalist, yet consistent rules, public policy should also ensure that consumers have the information they need to make educated choices and provide a backstop to protect consumers and citizens from abuse in markets.

As with Rule #5, some policy-makers find it hard to resist intervening to regulate competition. We see it most clearly in telecommunications, where some still argue that more government-enforced competition is needed.

Rule #7: Let Competing Technologies Compete

Technological innovation has now become central to addressing a wide range of public policy goals, including better health care, environmental protection, a renewed defense base, improved education and training, and reinvented government. For example, technology provides doctors and patients with state-of-the-art health information systems that improve the quality of care. Similarly, new generations of cleaner technologies can dramatically reduce pollution generated by industrial processes. … We should look for technology-enabled solutions to public problems, but not so that today’s winners are frozen in place at the expense of tomorrow’s innovators.

Amen. While government does need to target broad technology areas (e.g., clean energy, IT, robotics), it shouldn’t pick specific technologies within those sectors.

Rule #8: Empower People With Information

In the old economy, information was a scarce resource to which few outside of large corporations and governments had access. In the New Economy, constant innovations in ever-lower-cost information technologies have enabled increasingly ubiquitous access to information, giving individuals greater power to make informed choices. Governments should encourage and take advantage of this trend to address a broad array of public policy questions by ensuring that all Americans have the information they need as consumers and citizens.

There has been progress on this front: the recently announced National Broadband Plan, for example, takes a number of important steps in this direction.

Rule #9: Demand High-Performance Government

Government should become as fast, responsive, and flexible as the economy and society with which it interacts. The new model of governing should be decentralized, non-bureaucratic, catalytic, results-oriented, and empowering. …

When designing solutions to compelling public concerns, such as reducing industrial pollution or delivering world-class public education, government should hold organizations and individuals accountable for meeting goals, while allowing them flexibility to achieve those goals. In many cases, industry self-regulation can achieve public policy goals in ways that are more flexible and cost effective than traditional command-and-control regulation, while also enabling technological innovation.

Procedurally, governments should use information technologies to fundamentally reengineer government and provide a wide array of services through digital electronic means to increase efficiency, cut costs, and improve service. Digitizing government is the next step in re-engineering government.

Washington may pay lip service to Rule #9, but when the rubber meets the road, much is still the same. Perhaps the main area of progress is using IT to transform government, but even here a great deal remains to be done.

Rule #10: Replace Bureaucracies With Networks

In the old economy, bureaucracy was how we addressed many major public policy problems. In the New Economy, we must rely on a host of new public-private partnerships and alliances.

Rather than acting as the sole funder and manager of bureaucratic programs, New Economy governments need to co-invest and collaborate with other organizations — networks of companies, universities, non-profit community organizations, churches, and other civic organizations — to achieve a wide range of public policy goals.

Yet public policy has only begun to explore the potential of bottom-up, decentralized networks assuming the lead role in solving pressing societal problems. Government needs to co-invest in these efforts and foster continuous learning through the sharing of best-practice lessons. Most importantly, the collaborative network model requires government to relax its often overly rigid bureaucratic program controls and instead rely on incentives, information sharing, competition, and accountability to achieve policy goals.

Of the 10 rules, this last one may be the hardest for policy-makers to embrace. The legacy of government bureaucracy and “programs” as the solution to our problems — rather than government-enabled networks — is so deeply held that new approaches are not even considered in many cases.

More than a decade since we first published these rules, it’s clear that many of our prescriptions remain unheeded. Whether or not you embrace the term “New Economy” is not the point. The U.S. economy is fundamentally different from what it was two decades ago. To pretend that it hasn’t changed, and to continue ignoring the shifting landscape, will consign us to economic stagnation. That rules of the road issued in 1999 remain relevant today underscores just how little progress was made in the 2000s, and how much work needs to be done to bring America fully into the 21st century. Policy-makers and stakeholders from across the political spectrum need to move beyond the talking points of another generation and embrace policies based on today’s realities.

The views expressed here do not necessarily reflect those of the Progressive Policy Institute.

The Facebook Kerfuffle

With our limitless capacity for outrage these days, it’s always nice to read a sober and reasonable take on the kerfuffle du jour. Today, it’s a memo from ITIF’s Daniel Castro on the Facebook privacy imbroglio.

For those who missed it, the world’s most popular social networking site has come under fire for recent changes to its privacy policy. In the crosshairs are two new features: instant personalization and social plugins. Instant personalization is a pilot program that allows a few partners — Yelp, Microsoft Docs, and Pandora, to date — to use data from a Facebook user’s personal profile to customize that user’s experience on their own sites. Social plugins let websites place a Facebook widget on a page, so that users can click a “Like” button or post a comment that automatically shows up in their Facebook feed. In both cases, users have to opt out of the service if they don’t wish to use it.
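
Since the controversy turns on the default, a tiny sketch may help make the opt-out model concrete. Everything here is hypothetical, a simplified illustration of the policy rather than Facebook’s actual API or data model: sharing with partner sites is on unless the user switches it off.

```python
# Purely illustrative sketch of an opt-out privacy default.
# All names are hypothetical; this is not Facebook's actual API.

from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Opt-out model: partner personalization defaults to enabled.
    instant_personalization: bool = True
    blocked_partners: set = field(default_factory=set)

def may_personalize(settings: PrivacySettings, partner: str) -> bool:
    """A partner site may read profile data to customize its pages
    unless the user has turned the feature off entirely or blocked
    that particular partner."""
    return settings.instant_personalization and partner not in settings.blocked_partners

settings = PrivacySettings()
print(may_personalize(settings, "example-review-site"))   # True by default
settings.instant_personalization = False                  # the user opts out
print(may_personalize(settings, "example-review-site"))   # False
```

The design choice that drew the fire is visible in the defaults: a user who does nothing shares by default, which is exactly the behavior privacy advocates objected to.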

The changes predictably sparked an uproar from Facebook users and privacy advocates. The indignation even swept through the halls of Congress, with lawmakers registering their displeasure. But as Castro reminds us, it’s all much ado about not much:

Many Internet companies clearly intend to continue to find innovative ways to use personal data to deliver products and services to their customers. While Facebook CEO Mark Zuckerberg may or may not “believe in privacy”, it is clear that Facebook thinks that companies should respond to changing social norms on privacy and that the overall trend is towards more sharing and openness of personal data. So going forward, no Facebook user (or privacy fundamentalist) can continue to use the service without admitting that the benefits of using the website outweigh any reservation the user has about sharing his or her personal data. As the saying goes, “Fool me once, shame on you. Fool me twice, shame on me.”

Certainly some users may still object to this tradeoff. But if you don’t like it, don’t use it. Facebook is neither a right nor a necessity. Moreover, it is a free tool that individuals can use in exchange for online advertising. In fact, one high-profile Facebook user, the German Consumer Protection Minister Ilse Aigner, has already threatened to close down her Facebook profile in protest of Facebook’s new privacy policies. Users that feel this way about Facebook’s changes should vote with their mouse and click their way to greener pastures. Companies respond to market forces and consumer demands, and if enough users object to the privacy policy of Facebook, these individuals should be able to find a start-up willing to provide a privacy-rich social networking experience.

Castro doesn’t weigh in on whether Facebook did, in fact, violate its stated privacy policy, leaving that question to the Federal Trade Commission and noting that any organization that deviates from its policy should be held liable.

But the outcry that greeted the revelation that a corporation might use valuable consumer information for its benefit is a real Capt. Renault moment. My guess is that after the fuss dies down, most users will stay on Facebook, recognizing that the benefits of using it outstrip the risks and inconveniences. If the scandal makes people more informed and vigilant about personal data and privacy — both online and off — then all the better.

Of course, online privacy remains a big, unresolved issue, and we need to continue to press government to update our laws to protect consumers in a fast-evolving information environment. But, as Castro points out, the next time Facebook changes its privacy policy, “let’s not act like this is a national emergency.” We consumers actually have a lot more power than we think we have.