Last week’s The Economist leader and cover story, “Picking winners, saving losers”, painted an insidious picture of governments’ increasing intervention in market economies, arguing that the hideous Leviathan of the state was gobbling up one sector after another and warning that “picking industrial winners nearly always fails.” Now, put aside the fact that the government was forced into some sectors—such as automobiles and financial services—only after mammoth market failures and pleas for rescues from capitalism’s chieftains. The more important fact is that the article feeds a Socialism-is-coming hysteria and ignores how picking winners—within limits—has worked in the past for the United States (and Japan, South Korea, etc.) and is needed more than ever to bolster our long-term competitiveness.
Of course, the debate about the appropriate roles of the state and the private sector in market economies has raged for centuries. The debate is marred in part by vague terminology, and The Economist perpetuates this problem by throwing around a slew of terms—“picking winners”, “industrial policy”, “innovation policy”—without adequately distinguishing between them, while uniformly indicting them all as inappropriate manifestations of government economic intervention.
It would be more constructive to envision a continuum of government-market engagement that increases in four steps: from a “laissez faire, leave it to the market” approach, to “supporting factor conditions for innovation (such as education)” (which The Economist endorses, as, certainly, does ITIF), to going further by “supporting key technologies/industries”, to, at the most extreme, “picking specific national champion companies”—that is, “picking winners.” And while it is generally inadvisable for governments to intervene in markets to support specific national champion companies, ITIF believes there is an appropriate role for government in placing strategic bets to support potentially breakthrough nascent technologies and industries.
Ironically, The Economist asserts that, “Industrial policy may be designed to support or restructure old struggling sectors, such as steel or textiles, or to try to construct new industries, such as robotics or nanotechnology. Neither track has met with much success. Governments rarely evaluate the costs and benefits properly.” Yet, a few sentences later, the authors admit, “America can claim the most important industrial-policy successes, in the early development of the internet and Silicon Valley.” In one sentence, the article glosses over the point that the government, in this case the Defense Advanced Research Projects Agency (DARPA), “supported creation of ARPANET, the predecessor of the Internet, despite a lack of interest from the private sector.” (Italics mine.) But this point, as economists are wont to say, is “non-trivial.” In fact, it is precisely the point.
Early on, companies were reluctant to invest in the nascent field of computer networking because the sums required were enormous and the technology was so far from potential commercialization that companies were unable to foresee how to monetize potential investments. Moreover, such basic research often results in knowledge spillovers, meaning the company cannot capture all the benefits of its R&D investment (in economists’ terms, the social rate of return from R&D is higher than the private rate of return), and thus companies tend to underinvest in R&D relative to societally optimal levels. Of course, this dynamic pertained not just to the Internet; it applies today to a range of emerging infrastructure technologies such as biotechnology, nanotechnology, and robotics. As Greg Tassey, Senior Economist at the National Institute of Standards and Technology (NIST), explains it, “the complex multidisciplinary basis for new technologies demands the availability of technology ‘platforms’ before efficient applied R&D leading to commercial innovation can occur.” In other words, the levels of investment required to research and develop emerging technologies are so great that the private sector cannot support them alone, and thus, “government must increasingly assume the role of partner with industry in managing technology research projects.”
Such was the case with the initial development of the Internet, as government stepped in and provided initial R&D funding, helped coordinate research between the military, universities, and industry, and thus seeded development of a breakthrough digital infrastructure platform, making the Internet a reality decades before the free market ever would have (if ever) if left to its own devices. And this industrial policy has indeed been a spectacular success. As ITIF documented in a recent report, The Internet Economy 25 Years After .com, the commercial Internet now adds $1.5 trillion to the global economy each year—that’s the equivalent of adding South Korea’s entire economy annually.
Moreover, the list of technologies in which government funding or performance of research and development (R&D) has played a fundamental role in bringing the technology to realization is long and compelling. It includes: the cotton gin, the manufacturing assembly line, the microwave, the calculator, the transistor and semiconductor, the relational database, the laser, the graphical user interface, and the global positioning system (GPS), amongst many others. The National Institutes of Health (NIH) practically created the biotechnology industry in this country. And yes, even Google, the Web search darling, isn’t a pure-bred creature of the free market; the search algorithm it uses was developed as part of the National Science Foundation (NSF)-funded Digital Library Initiative. (But Google hasn’t done much to spur economic growth!) The point is that companies like IBM, Google, Oracle, Akamai, Hewlett-Packard, and many others may not have even come into existence—and certainly would not have prospered to the extent they have—if the U.S. government had not been either an early funder of R&D for the technologies they were developing or a leading procurer of the products they were producing. And if you don’t get Intel developing the semiconductors, or Cisco building out the Internet, or Akamai securing it, or Google making it accessible, then you don’t get the downstream companies like the Amazons or eBays, the latter of which 724,000 Americans rely on as their primary or secondary source of income.
Thus, while governments shouldn’t be creating and running such companies themselves—that is for the free market to do—the government has a role to play in thoughtfully, strategically, and intentionally placing strategic bets on nascent and emerging technologies—as the United States did with information and communications technologies in the 1960s and 1970s—that have the potential to turn into the industries, companies, and jobs that drive an economy two to three decades hence. We call this innovation policy, as opposed to industrial policy. Today, this points to the need for smart policies and investments in industries such as robotics, nanotechnology, clean energy, biotechnology, synthetic biology, high-performance computing, and digital platforms such as the smart grid, intelligent transportation systems, broadband, and health IT. Explicit in this approach is a recognition that some technologies and industries are in fact more important than others in driving economic growth—that “$100 of potato chips does not equal $100 of computer chips.” Indeed, they are not equal, because some industries, such as semiconductor microprocessors (computer chips), experience very rapid growth and reductions in cost, spark the development of subsequent industries, and increase the productivity of other sectors of the economy—not to mention support higher-wage jobs.
Yet The Economist frets that governments aren’t very good at identifying and investing in strategic emerging technologies. In impugning governments’ ability to pick winning technologies, the article cites failures such as France’s Minitel (a case of a country picking a national champion company) and argues that “Even supposed masters of industrial policy [like Japan’s MITI, or Ministry of International Trade and Industry] have made embarrassing mistakes.” But this would be tantamount to pointing to the spectacular failure of Apple’s Newton and arguing that Apple is no good at innovation. The Economist seems to suggest that if governments failed 80 to 90 percent of the time in picking technology winners (and ITIF actually thinks their success rates are much higher), then they must be pretty incompetent at the effort and should stop trying altogether.
But if private corporations followed that advice, then we would have no innovation whatsoever. Indeed, research by Larry Keeley of Doblin, Inc. finds that, in the corporate world, only 4 percent of innovation initiatives meet their internally defined success criteria. More than 90 percent of new products fail in the first two years. Other research has found that only 8 percent of innovation projects exceed their expected return on investment, and only 12 percent exceed their cost of capital. Yet companies have to continue to try to innovate, even in the face of these long odds, because research finds that firms that don’t replace at least 10 percent of their revenue stream annually are likely to be out of business within five years. The point is that just because innovation is difficult and success rates are low, this does not mean that corporations, or governments, should quit trying—or that their successes, like the Internet, can’t be spectacular and have a profound impact on driving economic growth.
But The Economist laments that industrial or innovation policies are subject to capture by industries. What this neglects is that all countries, including the United States, already have de facto industrial policies that favor some industries over others. In the United States, for example, our regulatory and tax system favors agribusiness through farm subsidies, the oil industry through oil subsidies, airlines and highways at the expense of rail, and the mortgage and financial industries. In fact, it is precisely because the United States has historically lacked an ability, or willingness, to have a clearly defined innovation strategy and an open dialogue about “making strategic decisions about strategic industries” that we’ve ended up with a de facto industrial policy ill-suited to supporting industries that will drive economic growth in the future. The Economist notes that “there is no accepted framework for ‘vertical’ policy, favoring specific sectors or companies.” True. So let’s make one.
Finally, while The Economist criticizes President Obama’s new Strategy for American Innovation (released in 2009), it fails to come up with compelling evidence that breakthroughs such as mapping the human genome, unlocking nanotechnology’s potential, or achieving the technology-enabled transformations that need to occur in sectors from energy to transportation will occur solely because of the market’s ability to allocate capital efficiently. In this, it discounts the need for effective, intentional public-private partnerships to invest in and collaborate in the development and diffusion of these industries and technologies.
This critique is not meant to pick on The Economist, which is usually chock full of solid reporting and informed commentary. Rather, it is to take on the myth of America’s purely free market capitalist system and to make the case for an informed innovation policy. It is also to note that countries (like the United States) find themselves desperately turning to industrial policy in a last-ditch effort to save stumbling sectors such as automobiles because they have failed to make adequate investments in innovation policies that would support science and technology, R&D, and the development and diffusion of innovative processes and technologies that could have helped keep old sectors like automobiles at the technology frontier while supporting the development of new sectors to drive the economy forward.
Finally, it seeks to rebut the ideological and highly politicized claim that governments cannot make prudent, targeted bets on the industries of tomorrow. As Greg Tassey has noted, competition among governments has become a critical factor in determining global market share among nations. Indeed, the role of government is now a critical factor in determining which economies win and which lose in the increasingly intense process of creative destruction.
There are appropriate and inappropriate roles for governments to play in this competition. Supporting education, removing barriers to competition, supporting free and fair global trade, opening countries to high-skill immigration, and targeting strategic R&D investments towards the technologies and industries of the future are appropriate roles. Other government policies, such as mercantilist ones that deny foreign corporations access to domestic markets, pilfer intellectual property by stealing it outright or making its transfer a condition of market access, create indigenous or proprietary IT standards, fail to adhere to trade agreements, or directly subsidize domestic companies or their exports, are illegitimate forms of global economic competition. The United States—and The Economist—must abandon its fanciful, stylized neoclassical notion of a purely free global economic marketplace unfettered by any form of government intervention whatsoever, and recognize that governments play a legitimate and crucial role in shaping the innovation capabilities of national economies. This is a competition among nations just as among corporations; and, as with companies, the nations that develop the best strategies and skills at fostering, developing, and delivering innovation are the ones most likely to win.
Photo credit: chrismear’s photo stream