

1. Introduction

In the rapidly changing world of emerging technologies at the dawn of the millennium, dominated by both entrepreneurial activity and powerhouse firms with substantial R&D capacity, it appears paradoxical to focus on the role of governments in emerging technologies. If anything, governments are perceived by many to be a problem for, not a solution to, the enhancement of a nation’s technological capabilities, with their power to tax, regulate, and otherwise burden innovation at every turn. This has certainly been the position of Microsoft in its ongoing conflicts with the US Department of Justice (and other antitrust authorities): in Microsoft’s view, the government is using outmoded regulations to stamp out innovation that benefits consumers.

A more considered view suggests that government policy plays an essential role in determining the rate and direction of innovation, and does so in virtually every country in the world. “Technology policy,” as it is often called, is much debated, often changed, and differs across countries; but every country has one, even if only by default, and its impact is substantial. The following annotated list gives some idea of the pervasiveness of public policy in setting the stage for innovation; the items are listed in order of increasing public intervention (some would say “intrusiveness”) into the innovation process.

Institutional Infrastructure Governments can provide legal and public institutions that encourage or discourage innovation. A strong intellectual property regime which carefully balances the need to reward innovators with the need to encourage follow-on inventions is possibly the most important infrastructure for inventors of patentable products and processes. An essential part of any intellectual property regime is that rights be enforceable in courts, so that a judiciary which enforces the law without discrimination or corruption is a necessary dimension of infrastructure as well.

* Professor, Public Policy and Management Department, Wharton School, University of Pennsylvania, Philadelphia, PA 19104. I wish to thank the Annenberg School’s Public Policy Center for their financial support for this project. I have also benefited from comments by Christiaan Hogendorn, Wharton School, on an earlier draft. I am also indebted to my colleagues at INSEAD’s Business Economics Seminar, Fontainebleau, France, for their comments. Some of the material in this chapter was previously published in the Journal of Law and Public Policy. Internet: faulhaber@wharton.upenn.edu; WWW: rider.wharton.upenn.edu/~faulhabe.

Innovation is also fostered in the presence of (i) an educational system that produces skilled workers capable of rapid adoption of new technology; and (ii) a financial system which provides capital to a broad range of firms, from the small and new to the large and established. Special strengths of the US, for example, are (i) its world-leading higher education system and (ii) its deep capital markets, particularly its well-developed venture capital market.

Research Infrastructure Basic research in physics, electronics, microbiology, software, and other fundamental disciplines has the economically awkward property that its benefits are non-appropriable: once a theorem or physical principle is known, it can be used by anyone who knows it. While basic research is essential to progress in developing new technologies for market, few firms are interested in investing in such research, since the results cannot be appropriated for their benefit alone but will benefit their competitors as well. Typically, the solution has been for governments to invest in basic research and to encourage wide dissemination of the results through scholarly publication, thus encouraging their use.1,2

Governments have several methods of supporting a research infrastructure. A government laboratory structure that supports scholars and encourages publication is one; in the US, the laboratories at the National Institutes of Health and the Brookhaven particle accelerator are examples. In Japan, the so-called 5th generation computer project of the early 1980s falls into this category; in Europe, the CERN accelerator also fits. In the US, however, funding of academic research is the more important avenue of research support. The National Science Foundation and the National Institutes of Health are the best-known vehicles for this support.

A similar model has been used for government funding of “testbeds,” working systems designed to test the feasibility of particular technologies. The initial funding of ARPANet in the late 1960s, eventually to become the Internet, was justified as a research testbed for packet-switched message networks within a university/research community.

1 This is, of course, the very opposite of commercially exploitable development activities, which are kept proprietary by the developers, at least until patent protection is achieved.

2 Having a government fund basic research does not fully resolve the issue of non-appropriability, since foreign as well as domestic firms may use government-funded basic research.


Military Technology Probably the most successful and costly technology policy of the postwar era was direct government funding, both in the US and the former Soviet Union, of technologies relating to defense, particularly aviation/space and electronics/communications. In both countries, defense officials took a highly active role in pushing new technologies through procurement, in order to strengthen national military capabilities. However, some technological developments, particularly in aviation, had spillovers into civilian uses, such as commercial aircraft. While a great deal is often made of these spillovers, there is little evidence that they have been widespread. Specific spillovers at specific times in specific industries have been important, of course, but the effect is much smaller than is often believed.

Government Directives This model is more interventionist, in that governments take a direct role in encouraging or protecting the commercial exploitation of well-understood technologies, but do not directly fund it. Examples here would be the assurance of low-cost capital to Korean firms in the late 1980s to build up their microchip manufacturing capabilities, the US-Japan Chip Agreement (1986 ff.) which attempted to ensure a market for US DRAM manufacturers, and similar protectionist policies in some European countries targeted to US or Japanese high-technology products.3

In some cases, governments may have a role in standard setting; this has occurred most recently in the US with the establishment by the FCC of a standard for High Definition TV. However, in most emerging industries (such as the personal computer industry in the 1970s and 1980s), standards typically are set by the market (possibly a dominant firm or a patent holder) rather than through government intervention.

Innovation may also be affected by government regulation. The Food and Drug Administration is the prime example; all new medications offered for sale in the US must be approved by the FDA. As a consequence, much drug research is structured around achieving FDA approval for the resulting medications. In other technologies, the influence of government regulation appears to be less.

Government Subsidies This model is perhaps the most interventionist of all, in which governments explicitly attempt to “pick winners.” Examples include the French Minitel of the early 1980s (discussed in more detail below), the European Airbus Consortium, and the US support of Sematech. In each case, there was no pretence that the government was supporting research; rather, the government was supporting the commercial rollout of a technology via specific firms.

3 Many countries have adopted “industrial policies” designed to create advantages for domestic industries at the expense of foreign firms. These industrial policies often have a technology component, but they are not explicitly technology policies, and so are not discussed here. Such policies have somewhat fallen from favor as a result of their complete failures in countries such as Brazil and India; more sophisticated industrial policies, often given credit for the “East Asian Miracle,” have had somewhat less appeal in the wake of the recent financial crises in a number of East Asian countries.

This list suggests that the role of government in the innovation process is very substantial, and certainly more than can be covered in a single chapter of a book. To graphically illustrate the complex role that governments play in the innovation process, I use the emergence of the Internet as an extended example of how this role plays out in practice.

The development of the Internet can usefully be divided into three phases: (i) the early years, 1970-1993; (ii) the high-growth years, 1994-present; and (iii) the future, in which the deployment of national, perhaps global, broadband networks is likely to play a key role. The role of the government is critical in each of these phases, but quite different in each. In examining the public role in each phase, we can gain an appreciation for the good or ill that can be wrought by government actions.

It should be noted that up until quite recently, the Internet has been almost exclusively a US phenomenon. Even today, Internet use is much more prevalent in the US than in any other country. However, this is rapidly changing, and we may expect the future Internet to be much less US-focused than in the past. If my account of the history of the Internet appears US-centric, that is because the Internet itself has, until recently, been US-centric.

2. Internet: the Early Years

The origins of the Internet lie in the needs of military research in the 1960s, when the Defense Department’s (DoD) Advanced Research Projects Agency (ARPA) funded the first quite primitive packet-switched network, connecting a small number of universities and laboratories performing DoD research. At the time, the dominant, indeed only, architecture for communications networks was circuit switching, in which each telephone call was assigned a specific set of network facilities (switch points, transmission lines, etc.) committed exclusively to that call for its duration. After the call was completed, those facilities could be reassigned to other calls on the public network. During the call, however, a continuous connection was maintained, so that a physical or electrical path existed between the two conversing telephones. At the time, such systems used fixed network routing algorithms, so that, for example, all calls from New York to Los Angeles went by way of Chicago, with the St. Louis route as a backup. For military purposes, fixed network routing had the disadvantage that disruption by enemy action of a small number of network nodes could partition the network, so that large sections of the network could not communicate with the rest; such networks were not “survivable.”

Some engineers recommended what was at the time a radical solution: packet switching, in which a message was broken up into packets (or “datagrams”) of digitally encoded information and sent into the network to a nearby switch (or server) that was ready to receive it. That switch then attempted to send the packet on to the next switch in the general direction of the datagram’s destination, and so forth. Eventually, all the packets that constituted the message were reassembled at the destination and conveyed to the receiving end. No continuous path was devoted to the message; rather, the packets made their way through the network along sometimes different paths, each forwarded onward switch by switch. What made the system survivable is that any switch could process any packet, and traffic would be automatically routed around disrupted switches. Network routing in this architecture is adaptive, not fixed. Thus, it was potentially well suited to military needs.
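To make the contrast between fixed and adaptive routing concrete, the short Python sketch below shows a toy packet finding a path around a failed node. The node names and the breadth-first search are my own simplification for illustration; they are not the actual ARPANet routing algorithm.

    from collections import deque

    # Toy network: each node lists its directly connected neighbors.
    links = {
        "NY":      ["Chicago", "Atlanta"],
        "Chicago": ["NY", "Denver", "StLouis"],
        "StLouis": ["Chicago", "Denver", "Atlanta"],
        "Atlanta": ["NY", "StLouis", "Dallas"],
        "Denver":  ["Chicago", "StLouis", "LA"],
        "Dallas":  ["Atlanta", "LA"],
        "LA":      ["Denver", "Dallas"],
    }

    def route(src, dst, failed=frozenset()):
        """Breadth-first search for any surviving path from src to dst,
        ignoring nodes in 'failed'. Returns the path, or None if the
        network has been partitioned."""
        frontier = deque([[src]])
        seen = {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in links[path[-1]]:
                if nxt not in seen and nxt not in failed:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(route("NY", "LA"))                      # NY -> Chicago -> Denver -> LA
    print(route("NY", "LA", failed={"Chicago"}))  # re-routed: NY -> Atlanta -> Dallas -> LA

A fixed routing table that insisted on the Chicago route would simply fail in the second case; the adaptive network delivers the packet anyway, which is precisely the survivability property the military wanted.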

But would it work? While the theory looked fine, only a live test could demonstrate how well it functioned in the real world, and the DoD funded the initial ARPANet in 1969 linking four sites: UCLA, SRI, UC Santa Barbara, and the University of Utah. These government/academic researchers used this quite primitive net for further research in network protocols, and eventually for e-mail and file transfers among themselves.

Within the scientific and engineering community exposed to ARPANet, it was an instant success, creating a surprisingly large demand for the service at other universities and research institutes and eventually spreading to other countries. The role of the government in setting up ARPANet and permitting it to expand fits the “testbed” model of supporting research. By 1975, the Defense Department had all the evidence it needed that packet switching worked as promised and could scale to a reasonable size. The ARPANet experiment was stable, and ARPA turned operational control of the network over to the US Defense Communications Agency. By the 1980s, ARPANet had grown substantially and was split between MILNet, which handled military DoD traffic, and ARPANet, which supported the advanced research component. Meanwhile, the National Science Foundation began organizing NSFNet, a network for the conduct of research among universities around the country. This network became the mainstay of research networking in the US in the late 1980s, and in 1990 ARPANet was shut down and all traffic moved to the NSFNet.4 Additionally, university consortia developed “mid-level networks” which interconnected geographical clusters of net sites and managed this localized traffic. NSFNet was the “backbone” network connecting sub-networks around the country and eventually the world. Together, these interconnected networks became known as the Internet.

The basic packet-switching technology that enabled this network was built upon by new services such as File Transfer Protocol (FTP), Gopher and WAIS (methods for accessing remote files), and Archie and Veronica (early search engines). Clearly, the existence and power of the Internet spawned a number of new technologies to enhance its power, even in these “pre-Web” days.

Scientists relied on this global connectivity for a variety of research purposes. Earth scientists could run remote seismographic stations in the Andes Mountains from their desktops, microbiologists could swap datasets of experimental results with colleagues in California, Italy, even the Soviet Union, and mathematicians could collaborate on papers with co-authors in Japan and India over the net. During this period, many students also learned to use the network and, when they graduated, wanted to continue to have access to it. In fact, a network culture developed around the Internet: sharing was the rule, everything was free, and charging for anything was not only forbidden by the administrators of this academic enterprise, it was universally vilified by “netizens.”

4 This early history is distilled from http://www.cs.washington.edu/homes/lazowska/cra/networks.html, a short historical piece written by Vinton Cerf, the “founding father” of the Internet.

The growth of the Internet continued at a very high rate, but still within the scientific and academic world. Most Internet sites were at educational institutions, some at government agencies, and a very few at private firms, generally engineering-oriented companies or producers of Internet-related products. The culture remained virulently anti-commercial, focused on free provision of information and software; it was characterized at the time as the “gift economy.” For example, the X Window System for Unix operating systems, developed and maintained at MIT, was distributed free on the Internet, as were periodic updates. While the network and the network community grew very rapidly in the late 1980s and early 1990s, the phenomenon was still largely invisible to the general public and to major corporations. For many “netizens,” protective of their “gift economy,” this invisibility was both desired and nurtured. However, it was not to last.

3. The Internet Comes of Age

Three major developments in the early 1990s changed forever the course of the Internet:

First, in the early 1990s, NSF decided that the Internet should be privatized. It notified the mid-level networks of its intent to exit the business and suggested that they migrate to a for-profit model. It also began plans to phase out NSFNet in favor of a private solution for the backbone network. The announcement was met with strong resistance from the Internet community, which perceived the publicly funded “gift economy” disappearing,5 a perception that was largely correct. The privatization was completed when the NSFNet ceased to exist in 1995.

Second, in 1989 a computer scientist working at the CERN Laboratory in Switzerland invented the standards and protocols that constitute the World Wide Web (WWW), a method for accessing data of all forms worldwide with a unique addressing structure. Previous methods for achieving this, such as Gopher and WAIS, were much less elegant and harder to use. However, it was not until a team of programmers at the National Center for Supercomputing Applications (NCSA) designed a graphic “browser” that worked with PCs that a rich and easily accessible means of using the WWW was available to the non-scientist PC user. Both CERN and NCSA are publicly funded research centers.

5 See, for example, Kahin, B. (ed.), 1992, Building Information Infrastructure, New York: McGraw-Hill, for early discussions and concerns about the (at the time) coming privatization of the NSFNet backbone network.

Third, by 1993 over 30% of US households owned personal computers, generally with a Windows or Macintosh interface.6

While we cannot say that these developments “caused” the recent popularity and growth of the Internet, it is reasonably clear that they were necessary conditions for its growth. Of these developments, two of the three were initiatives of public agencies.

In retrospect, it is clear that the testbed model of government support of the Internet was no longer applicable by the early 1990s. The Internet was not the “testbed” engineering experiment it had been in the 1970s; it was now an operating entity with a wide user base. NSF was quite correct in moving the ownership and management of the Internet out of the public sector and into the private sector. However, in doing so it incurred the wrath of many in its natural constituency of scientific researchers. We can thus draw our first lessons from the Internet experience:

Lesson 1: The appropriate locus for government intervention/support of technology is at the earliest research stage.

Implications for Managers: Monitoring of research in your field at universities and government labs can be a good early-warning indicator of what opportunities lie ahead, and may give you the chance to anticipate and plan for change.

Lesson 2: Withdrawal of government support for a research effort as it gets closer to commercialization is strongly resisted by the beneficiaries of that support (be they universities, scientists, not-for-profits, or private-sector firms).

Implications for Managers: If you are a beneficiary of government support, lobby hard for continued (or new) subsidies. If you are a firm taking advantage of this emerging technology, position yourself as a “safe haven” for potentially valuable players about to lose government support.

Despite the predictions of many academic users, the switch to a privatized network proceeded smoothly and seamlessly (though not without enormous effort on the part of many service providers). The phenomenal growth of both Internet volume and Internet hosts continued apace; Figures 1 and 2 illustrate the historical growth through 1995 and its projection into the future.

6 Freeman, Andrew, 1996, “Technology in Finance Survey,” Economist, October 26. As of this writing, about 45% of US households have PCs and about 1 in 4 people are online. Household penetration in Europe is significantly lower; about 1 in 20 people are online in France, and about 1 in 10 in the United Kingdom.


[Figure 1: Traffic on the NSF backbone,7 in bytes per month on a logarithmic scale, Jan. 1993 through Jan. 2000, showing historical and projected values; the “NOW” marker is labeled 10 exabytes per month. Figure 2: Number of Internet hosts, on a logarithmic scale, Jan. 1990 through Jan. 2001, showing historical and projected values; the “NOW” marker is labeled 187 million Internet hosts.]

It might be thought that this astounding growth would have caught the eye of corporate America, particularly in light of the recent privatization of this extraordinary resource. But corporate America’s eye was elsewhere. During 1993, the buzzword was “multimedia,” a catch phrase that included video-on-demand and other entertainment options. Several very large mergers were proposed, only some of which were consummated; the most publicized was the Bell Atlantic-TCI deal.8 These mergers were predicated, in part, on the future market potential of broadband network entertainment delivery systems.9 At the time, the Internet and WWW were still perceived by most communications, entertainment, and software firms as at best a predecessor of, and at worst a distraction from, the true “Information Superhighway.”

Nor was their skepticism unwarranted; corporations had seen several “false dawns,” and the unruly hackers’ paradise of the Internet hardly looked like the engine of commerce and entertainment that large corporations envisioned as the Information Superhighway. However, this skepticism may also be seen as a missed opportunity; for example, if Barnes and Noble had been alert to the challenge of the Internet early on, it might have been able to preempt Amazon.com.

By 1994, the sustained growth of the Internet attracted more and more users and corporations. The number of “.com” sites (indicating a commercial user) exceeded the number of “.edu” sites (indicating an educational user) for the first time in Internet history.10 Both total traffic and total number of hosts on the Internet exploded during 1994. Nevertheless, the Internet continued to be viewed by most large corporations throughout 1994 and 1995 as something of a fad, the “oat bran muffin of the 1990s.”

7 The NSFNet architecture was phased out by April 1995; no comparable usage statistics are now available. These usage numbers do not reflect total Internet bytes, but only those traversing the NSFNet backbone. Tables: A.M. Rutkowski and the Internet Society.

8 See Korporaal, Glenda, 1993, “Baby Bell’s $90bn Mother of All Mergers,” Financial Review, 14 October, for an account of the merger announcement. See Ian Scales, “Irreconcilable Differences?” Communications International, December 1994, for an account of the failure of the merger negotiations.

9 Barrett, Andrew C., 1993, “Shifting Foundations: The Regulation of Telecommunications in an Era of Change,” Federal Communications Law Journal, 46(1), December.

The change in corporate perception of the Internet is most dramatically illustrated by the sequence of events in 1995 at Microsoft, the most powerful software firm in the world. The highly publicized launch of Windows 95 in August 1995 also introduced the Microsoft Network (MSN), the firm’s much anticipated entrée into on-line services. Apparently, the software giant anticipated that it could brush aside the “cowboy” Internet simply by placing an MSN icon on the Windows 95 desktop. However, Microsoft’s early experience with MSN, coupled with its assessment of the traditional on-line services market, apparently was not entirely satisfactory; in December, Microsoft announced a major shift in strategy that would focus its considerable resources on the Internet. This acknowledgment by the most influential software firm in the world that it was more profitable to cooperate with the Internet than to compete with it marked a turning point in both public and corporate perceptions of the Internet’s future. The Internet was no longer seen as another “false dawn”; the ‘Net appeared to be here to stay. By 1996, Microsoft had announced plans “…to eliminate proprietary interfaces altogether and move entirely to Web-based content.”11

The years since then have seen an explosion of new businesses, Internet startups, new ways of doing business, e-commerce, and many other Internet-related activities undreamed of prior to 1995. Internet service providers (ISPs) now cover virtually the entire US, approximately 45% of US households have a PC, and about 23% of US households have an account with an ISP. Clearly, the old model of the Internet as an academic/scientific research tool has been entirely supplanted by the new model of a rapidly growing information infrastructure that will eventually reach most of the US population and much of the world’s.

With this transformation have come several challenges to the government’s role in this technology. In each case, the government has struggled to meet the challenge, with greater or lesser success.

Internet Governance and Domain Names Until now, the Internet has been “run” by a loose confederation of Internet professionals in the Internet Society and the Internet Engineering Task Force. In the academic/scientific research model, this volunteerism of dedicated professionals worked extremely well; in the highly commercialized, international infrastructure model, this governance structure is no longer appropriate. The US government, acting in consultation with other national governments and the Internet community, has attempted to aid in the transition to a more private-sector, international governance structure. The result is a new not-for-profit organization, the Internet Corporation for Assigned Names and Numbers (ICANN), which will oversee, among other things, the re-design of the domain name system. This system was for years run by the legendary (and recently deceased) Jon Postel, an academic from Southern California who was the very antithesis of the new Internet commercial culture. As of this writing, who will assign domain names, and how, is still being thrashed out among ICANN, various governments, and Network Solutions, Inc., the domain name contractor of the last few years. This saga is not without its lesson.

10 Lottor, Mark K., 1994, Internet Domain Survey, Network Wizards, Inc., October.

11 Spangler, Todd, 1996, “The Net Grows Wider: Internet Services,” PC Magazine, 15(20), November 19.

Lesson 3: Government as coordinator can help manage the transition from public (research and education) to private (commercial). Everyone will complain.

Implications for Managers: Periods of transition are wonderful opportunities to gain a lasting competitive advantage; lobby hard and lobby smart to have your standard adopted (or at least avoid having others’ standards adopted).

Social and international consequences The global ubiquity of the Internet and the ease with which anyone, including young children, can access its power and breadth of content have caused concern in several areas:

Sexually explicit material The availability of sexually explicit material on the Internet to children has been a cause for concern to parents, many of whom were less competent at navigating the Web than their children. The Communications Decency Act, enacted as an amendment to the Telecommunications Act of 1996 (and subsequently struck down by the courts), was a political response to that widespread concern. While First Amendment advocates in the US attacked the amendment, many see this as a legitimate concern that needs to be addressed in some fashion, and one that has not yet been addressed.

Hate groups and “dangerous” information The World Wide Web has become a vehicle for many special interest groups, including some that have been characterized as “hate groups”: neo-Nazis, white supremacists, and extremist militias. Often, the websites of such groups publish recipes for making bombs and other information thought by many to be dangerous. While it has always been possible for such groups in the US to exercise their First Amendment rights, the ease with which the Internet makes this expression available to all is unsettling to many. There is political pressure to “do something” about this problem, particularly in the wake of the 1995 bombing of the Murrah Federal Building in Oklahoma City.

Differing cultural impacts of web content While the above material causes some concern within US society, its impact on other cultures is far more severe.12 In Islamic countries, the level of sexually explicit material available on the web far exceeds socially acceptable levels, and yet those societies have no way to control websites in other countries and thus to limit access to such sites. In Germany, there are strict laws concerning the distribution of neo-Nazi materials, and yet such materials are readily available from US-based websites. In some cases, the German authorities have taken action against local ISPs through whose facilities this material was accessed. Most would agree that this is not the best solution to the problem, and yet the political demand for solutions is quite strong.

In each case, governments are called upon to respond to what many see as the new technology’s challenge to social mores. These examples illustrate that political solutions are often sought to limit these impacts, perhaps with unintended consequences.

Lesson 4: Governments will respond to political demands from constituents who perceive a threat from the new technology.

Implications for Managers: Be prepared both offensively and defensively; political demands can be a wonderful profit opportunity if you are prepared to capitalize early. For example, ISPs that provided software permitting parents to screen their children’s web activity were well-positioned for profitable sales when children’s access to salacious material online became a public issue.

Economic, legal, and strategic impacts The ability of the web to make information available anywhere instantaneously has been a copyright holders’ nightmare. Those who place original content on the web, or whose content is electronically copied and placed on the web, view the Internet as one huge copy machine. The phenomenon is not new; the advent of cheap duplicators in the 1960s caused a similar concern. There is no question that the Internet will change how intellectual property, especially copyrighted material, is distributed and protected. Current institutions will have to change,13 and there will be winners and losers from that change.14

12 For an interesting legal perspective on controlling pornography in different countries, see Dawn A. Edick, “Regulation of Pornography on the Internet in the United States and the United Kingdom: A Comparative Analysis,” Boston College International and Comparative Law Review, 21, Summer 1998.

13 For a primer on intellectual property law and the Internet, see the several articles on this subject in Berkeley Technology Law Journal, 13, 1998.

14 An interesting example of piracy, adaptation to piracy, and innovation in electronic distribution, all driven by the web, is the digital distribution of music. See “The Music Industry: A Note of Fear,” The Economist, October 31, 1998, p. 67.

The Internet can also provide very secure means of transmitting sensitive information via encryption. In fact, “public-key” encryption methods give virtually everyone access to effectively unbreakable codes. While this is very useful for online banking, it makes police work, combating terrorism, and other law enforcement and defense efforts much more difficult. Attempts by the US government to limit the export of encryption technologies have been met with derision by both the Internet community and the commercial interests for whom security is essential to e-commerce.
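For readers unfamiliar with the idea, the following toy Python sketch illustrates the public-key principle: the encryption key can be published to the world, yet only the holder of the matching private key can read the message. The numbers are deliberately tiny and insecure, chosen purely for illustration; real keys are hundreds of digits long.

    # Toy "RSA-style" public-key encryption with deliberately tiny primes.
    p, q = 61, 53
    n = p * q                          # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                             # public exponent: (e, n) is published openly
    d = pow(e, -1, phi)                # private exponent: kept secret (Python 3.8+)

    message = 65                       # any number smaller than n
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
    recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
    assert recovered == message

Scaled up to numbers far too large to factor, the same arithmetic underlies the secure transmission on which online banking and e-commerce depend.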

In fact, electronic commerce on the Internet has spawned a host of legal problems concerning how commercial law applies to e-transactions. How do fraud laws apply? Are “digital signatures” valid? What privacy rights do consumers have when they engage voluntarily in e-commerce? Does the owner of a website have the ability to restrict “unwelcome” links from other websites? This emerging technology has forced legal scholars and practitioners alike to confront new issues in commercial law.15

15 See the special issue of the South Carolina Law Review, 49, Summer 1998, which features a number of articles discussing these electronic commerce concerns.

Lesson 5: Both commercial and governmental interests will seek a legal/political response to disruptions the technology creates for their way of doing business. Such responses may impose disruptions on other parties, thus propagating further demands for legal/political response.

Implications for Managers: Foreseeing all the implications of changing technology is difficult but a necessary part of doing business today and in the next century. Anticipate both corporate and market disruptions and plan how to take advantage of the profitable opportunities that result. Legal/political responses to disruptions will offer great opportunities to those who can influence the political demands and responses, and to those who can provide profitable support to firms that suffer disruption.

4. The Internet of the Future

Just as the Internet of today looks nothing like the Internet of ten years ago, so the Internet of the new millennium will be totally different from today’s. However, we can see at least some of the ways in which the Internet will evolve. Most important among these is the strong long-term demand for increased bandwidth.

“Bandwidth” refers to the amount of information per unit time that can be processed or transmitted through an electronic medium, such as a computer, a transmission pipe, or a router. In information transmission, a telephone line is narrowband; unaided, the typical telephone line can handle about 10 Kbps (kilobits per second). Using data compression technology, this can be boosted to nearly 50 Kbps. By comparison, a broadband video channel, such as broadcast TV, is over 5 Mbps (megabits per second), since the information content of moving pictures is much higher than that of speech. In most offices today, computer network connections are broadband 10 Mbps Ethernet connections. By contrast, most homes use a telephone line to access the Internet, moving at the much slower speeds associated with the narrowband channel. This difference is at least in part responsible for the frustrating wait that often occurs when accessing the web from home: the “World Wide Wait.” In fact, a strong constraint on web designers has been that most users access a website over a narrowband connection, so only limited graphics are feasible, and certainly nothing as bandwidth-consumptive as video should be sent over the web (even though this is technically feasible).
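To put these numbers in perspective, the back-of-the-envelope calculation below shows why the same page feels instantaneous at the office and interminable at home. The 2-megabyte page size is an arbitrary assumption chosen only for illustration.

    # Rough transfer times for a hypothetical 2-megabyte web page
    # (ignoring protocol overhead, congestion, and server delays).
    file_bits = 2 * 1_000_000 * 8           # 2 megabytes expressed in bits

    connections = [
        ("10 Kbps telephone line (unaided)",    10_000),
        ("50 Kbps telephone line (compressed)", 50_000),
        ("10 Mbps office Ethernet",         10_000_000),
    ]
    for label, bits_per_second in connections:
        seconds = file_bits / bits_per_second
        print(f"{label:38s} ~{seconds:7.1f} seconds")

    # Prints roughly 1600, 320, and 1.6 seconds: about 27 minutes,
    # 5 minutes, and under 2 seconds respectively.

Even with compression, the home user waits minutes for what the office user gets in about a second: the “World Wide Wait” in a nutshell.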

The next technology to emerge in the Internet lifecycle is the national, and eventually global, diffusion of ubiquitous broadband16 networks. Access to such networks would provide households and businesses with a far richer interactive experience, including two-way video, online demos, and 3D visualizations. Whether this will occur, and how it will play out, remain great uncertainties.

In fact, networks are nothing new. “Hard” networks, such as road and rail systems, power grids, and water and gas distribution networks, have been with us for a century. These networks connect customers to suppliers (or to other customers) with physical facilities. “Soft” networks, such as computer hardware and software or automobile service and parts systems, depend upon shared standards and protocols to link products and their uses, and are a barely noticed part of our lives. Telecommunications networks have also been with us for a century, from the early telephone networks, local in scope, to the emergence of the current globally connected telephone system. In the 1920s, radio networks emerged, followed by television networks in the 1940s and 1950s. Somewhat later, cable television networks grew, slowly at first, but now passing over 90% of US homes. In other countries, satellite TV distribution networks perform much the same role. More recently, cellular telephone networks have also grown, illustrating the point that telecommunications networks, though “hard” in the sense used above, can use wireless links, without a continuous physical connection.

16 I use the term “broadband” to refer to an electronic signal (or the facilities designed to transmit that signal) carrying substantially more information than voice, such as video or high-speed data. For the more engineering-oriented, I consider ISDN to be more than voice but less than broadband. Generally, a useful if not wholly accurate benchmark would be signals of 10 MHz or above. Note that modern compression technologies may eventually permit the practical carriage of such signals across telephone lines originally designed for voice.


In this broader network context, why the sudden interest now in broadband networks, and what is unique about them? For those familiar with this technology, the surprise is that it took so long. Engineers and communications specialists have been predicting the coming of broadband systems with both confidence and regularity over the last thirty years. There have been numerous “false dawns,” such as teletext and videotext, and more successfully, Minitel in France. However, despite the enthusiasm of engineers and telephone companies, consumers did not have a question to which broadband data networks were the answer.

But given the rich context of existing telecommunications networks, what is so special and unique about broadband data networks? The fact that they are broadband is nothing special; coaxial cable and broadcast TV are broadband. However, both of these media are inherently one-way: they are designed to carry video content from a producer of that content to customers. Recent attempts to re-fit cable systems for two-way traffic, though successful, reinforce the point that the system was designed to deliver a specific product, and attempts to modify it are quite costly. These are specialized systems. The fact that broadband data networks are interactive is also nothing special; the telephone network has been two-way for a hundred years. But again, this is a network designed to deliver a specific product, two-way simultaneous voice, and it will not easily be modified to do much else. This too is a specialized system. What is special is that broadband data networks are both broadband and interactive, and it is this conjunction of attributes that creates their power: just about any electronic signal can be sent from anybody to anybody else. Rather than the design of the network tying it to a specific purpose, it is a general-purpose system, whose use can be shaped and tailored by the needs and desires of its users.

In sum: it now appears that the long-anticipated mass deployment of broadband data networks is at hand, with the Internet and WWW forming the basis of this growth. How fast this will occur, what fraction of households, businesses, schools, and governments will eventually become active users, what technologies will be used, and what they will be used for: all are subject to great uncertainty. There is a very wide range of possibilities, from “small impact on a few enthusiasts” to “a fundamental change in the way we all live and work.”

However, which route is taken, and how fast it develops, will almost surely be deeply affected by public policy decisions being made now regarding government involvement in infrastructure development, either via direct encouragement, even investment, or via regulation, possibly with universal service mandates.

5. Emerging Technology and Infrastructure: Public Policy Concerns

Often, an emerging technology depends upon the availability of a supporting infrastructure. For example, cellular (and PCS) telephony depends upon a network of radio towers, allocation of electromagnetic spectrum, and the existing terrestrial telephone network. Another example is the development of genetically engineered drugs, which depends upon the distribution and knowledge infrastructure of existing physicians and the manufacturing capacity of traditional pharmaceutical firms. In some cases (such as cellular telephony) government plays a very active role in either developing the infrastructure directly or regulating its operation. The supporting infrastructure of an emerging technology (if one is required) may thus be a public policy issue.

In the case of the Internet, the supporting infrastructure of the future is a ubiquitous broadband electronic network. What are the public policy issues associated with electronic network infrastructure? Generally, the economic issues that draw governmental attention are: (i) is the service available and affordable to all citizens? This is generally referred to as “universal service;” (ii) is the service efficiently provided at a reasonable quality? This is generally referred to as “quality of service;” (iii) is the provider earning excess profits from abuse of a monopoly market position? and (iv) is the distribution system available to all content providers? I consider each in turn.

The Problem of Universal Service

Implicit in the concept of infrastructure is that it must serve most of the population. For example, over 94% of households have telephones, over 98% of households have television, over 90% of households are passed by cable (about two-thirds of which subscribe to the service),17 and about 45% of households have personal computers.18 Almost every desk and workstation in US industry has a computer on it, and almost all the growth in computer sales is now coming from homes, where growth rates are still high. Other related products, such as VCRs, have also achieved relative ubiquity.19

Each of these industries has arguably achieved, or is about to achieve, “universal service”: those customers who want the service are generally able to afford it. And yet the routes they followed to universal service are quite different. In the cases of television, VCRs, and personal computers, competitive markets drove prices down and market penetration up. In the case of cable TV, the laying of cable in all neighborhoods was generally a condition of the franchise that granted each company a geographic monopoly on wireline video delivery. In the case of telephone, universal service had been an objective of both the old Bell System and its regulators since the early years of this century, but it was not realized until about 1960. In cable and telephone, universal service was an explicit public policy objective, but different policy instruments were used to achieve it. In telephone, active regulation was the chosen instrument; in cable, it was the contract terms of the franchise.

17 See Faulhaber, Gerald R., 1994, “Public Policy In Telecommunications: The Third Revolution,” Information Economics and Policy, 7, for supporting material.

18 See Freeman, Andrew, ibid.

19 Even cellular telephone, once thought to be a product targeted to wealthy stockbrokers phoning in buy and sell orders from their BMWs, has achieved a market penetration substantially beyond that originally predicted. In 1995, the cellular market grew by 36% to 32 million subscribers (compared to 145 million landline telephone subscribers) (McCall, Tom, 1996, “US Cellular Market Exhibits Solid Growth,” DataQuest Interactive, March 25). Today, the person using a cellular phone next to you in a traffic jam is as likely to be driving a pickup truck as a BMW.

The Price of Mandated Universal Service In both cable and telephone, however, the price of publicly mandated universal service was monopoly. In order to make it feasible (so it was claimed) for a firm to serve all customers, profitable and unprofitable alike, the government had to forbid entry by competitors into the firm’s market area. Why should this be? The universal service mandate of regulators has traditionally gone beyond ensuring that service is available to all; it is rather that service should be affordable by all. In order to achieve this objective, regulators have traditionally insisted on pricing practices that involve cross-subsidies:

• Prices for service should be the same for all (or based on simple criteria such as distance), regardless of cost. For example, telephone service in rural areas where it is more costly to provide is priced no higher than service in suburban areas where it is less costly to provide. Long-distance telephone service rates depend only upon distance between the two parties having the conversation, whether the call uses very expensive sparse routes across rugged terrain or relatively cheap dense routes across a plain.

• Prices for basic services, such as telephone access or basic-tier cable TV, are often subsidized by “premium” services, such as long-distance and international telephone or premium cable channels, in order that they be “affordable,” especially for the poor.

It should be noted that these pricing practices are not unique to the US, but occur in publicly regulated or publicly owned networks throughout the world.

However, such practices cannot be sustained in the presence of competitive entry. New firms would enter only those markets in which prices are held above cost in order to subsidize other customers, forcing incumbents either to respond with price decreases or to lose the business altogether. In either case, the source of internal subsidy would eventually disappear, and the incumbent could no longer afford to serve unprofitable (though allegedly deserving) customers. Therefore, in order to maintain the subsidies that most regulators use to achieve universal service, regulators restrict competitive entry, either by regulatory fiat or by the granting of franchise monopoly.
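A stylized numerical example makes the arithmetic of this unraveling concrete; all of the customer counts, costs, and prices below are hypothetical, chosen only to illustrate the logic.

    # A hypothetical incumbent serving two markets under a uniform price of 15.
    urban_customers, urban_cost = 100, 10    # low-cost market
    rural_customers, rural_cost = 50, 20     # high-cost market
    price = 15

    profit = (urban_customers * (price - urban_cost)
              + rural_customers * (price - rural_cost))
    print(profit)    # +250: urban profits cover the losses on rural service

    # An entrant targets only the profitable urban market and offers a price
    # of 11; the incumbent must match it there or lose that business.
    urban_price, rural_price = 11, 15
    profit = (urban_customers * (urban_price - urban_cost)
              + rural_customers * (rural_price - rural_cost))
    print(profit)    # -150: the internal subsidy can no longer be sustained

Hence the incumbent’s choice: raise the rural price toward cost or exit that market, unless the regulator restricts entry into the urban market.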

The costs of regulated monopoly have been well documented elsewhere,20 including reduced incentives for efficient operation, reduced incentives for innovation, excessive resources devoted to “rent-seeking” through the regulatory process, and so forth. With an emerging technology, the threat that the industry could be regulated could well be enough to stifle the whole enterprise. Yet some very pro-competitive commentators from the computer industry have suggested that computers should be made available to all, perhaps without realizing the economic and regulatory consequences of this advice. There is no question that the price of monopoly is quite high; is paying it really necessary in order to achieve universal service?

20 For an early reference (among many others), see Braeutigam, Ronald, and Bruce Owen, 1978, The Regulation Game: Strategic Use of the Administrative Process, Cambridge, MA: Ballinger Publishing.

As a matter of logic, a public policy of universal service need not lead to franchise monopoly. For example, government could provide direct subsidies to low-income (or high-cost) consumers, perhaps in the form of computer stamps or Internet stamps. Alternatively, non-exclusive franchises could be granted that require each franchisee to provide universal service but let multiple franchisees compete for consumers’ favor. For example, two or three broadband network providers could compete in a metropolitan area, all under the requirement to provide universal service.21

The Problem of Quality of Service

In a market with some form of competition, the expectation is that quality of service will take care of itself. Firms will provide the level of quality that customers demand and are willing to pay for, and competition will ensure their responsiveness to customers. In the case of monopoly, however, the incentives for the firm to provide appropriate quality levels may be diminished, so that quality of service may suffer. The most salient examples of quality of service problems in these network industries have occurred in the more monopolistic industries: cable TV and the Internet.

The recent congestion on the Internet is less a problem of monopoly than it is of growth outstripping the Internet’s governance structure. WWW users have been experiencing agonizingly slow download times from graphics-intensive web servers, and the delays from the US to European or Asian sites are extremely long. Since the network uses shared resources, increased demands cause those shared resources to become congested. Management of this situation is at once everyone’s problem and no one’s problem; both demand and supply of all network components, not just a few, must be managed to solve this quality of service problem.

A decline in service quality of another kind was observed by many customers of the cable TV industry during the late 1980s, and it gave rise to demands on Congress for a solution. In the event, that solution was the Cable Re-regulation Act of 1992. The political demand grew out of what many customers perceived as shoddy treatment in the handling of requests and complaints and a failure to provide reliable, outage-free service.

21 In the case of cable television, potential cable operators claimed that they would undertake cable investments only if they were granted exclusive franchises. Very few municipalities chose to test this claim. Several researchers suggest that many municipalities could indeed support more than one cable system. See, for example, Hazlett, Tom, 1990, “Duopolistic competition in cable television,” Yale Journal on Regulation, and Levin, Stanford, and John Meisel, 1991, “Cable television and competition,” Telecommunications Policy.


In principle, regulators generally have the legal power to coerce firms to provide the “right” service level. In practice, this is more difficult, as is borne out in Williamson’s well-known analysis of cable TV franchise bidding.22 Additionally, it is not clear that regulators are good at assessing the quality level that customers would demand in a more competitive market. For example, in the pre-competitive airline market, most scholars agree that airlines over-provided schedule quality, at the cost of higher fares, as a result of the CAB’s regulatory practices. After deregulation, schedule quality “deteriorated” to that for which customers were willing to pay. Another example occurred in telephone; prior to the deregulation of terminal equipment, the Bell System (with regulatory approval) provided rather simple telephones that were virtually indestructible. After deregulation, it became clear that most customers preferred telephones with many more features and a shorter life; the telephone soon became another consumer electronics product. In both cases, regulation led to an inappropriate quality level (as measured against the competitive standard).

The Problem of Monopoly

For some emerging technologies, “bottlenecks” develop in which a single firm controls consumer access to the technology. Where an emerging technology depends upon a network infrastructure, that infrastructure may be a “natural monopoly,” a market in which competition would naturally lead to a single supplier as the most efficient outcome. The infrastructure could be a true network, such as the telephone network or some broadband network, or a common interface, such as the Windows operating system. In such cases, antitrust actions to break up a monopoly would be ineffective, as market forces would eventually lead to the re-monopolization of the industry. Some form of regulation may be justified as a means to control the abuse of monopoly power in such industries, and this is the rationale given by many for the creation of regulated monopolies in network industries. Others argue that these monopolies may not be so natural, but are in fact products of the very regulation that seeks to control them. This latter view is somewhat more compelling, in that virtually all regulators protect regulated monopolies with entry prohibitions. In the words of Alfred Kahn, “If the monopoly is so natural, why does it have to be protected?”23 In fact, the protection is necessary to maintain subsidizing price structures, which are indeed a product of regulation.

In any case, regulators find that control of monopoly power is added to their list of responsibilities, be that monopoly natural or created. Generally, much regulatory attention is devoted to determining whether a firm is abusing its market power. In the classic regulated monopoly, this concern takes the form of ensuring that the firm’s earnings are not “excessive,” that is, do not exceed the cost of capital. In regulated monopolies operating in some markets subject to competition, the concern takes the form of ensuring that power in monopoly markets is not being used to subsidize operations in competitive markets. Both tasks are extremely difficult, and policing cross-subsidy is virtually impossible. For example, as telecommunications competition slowly increased during the 1970s and early 1980s, the Federal Communications Commission devoted very substantial efforts to developing a standard by which to judge whether or not Bell System rates involved cross-subsidy, without success.

22 Williamson, Oliver, 1976, “Franchise Bidding for Natural Monopolies – In General and with Respect to CATV,” Bell Journal of Economics, 7(1).

23 Kahn, Alfred, 1970, The Economics of Regulation, New York: John Wiley & Sons.

The Problem of Vertical Integration (Content vs. Conduit)

The “network” of a network industry is a distribution system, a conduit over which something else, content, is sent. In telecommunications, this something is telephone calls; in cable, it is video programming; in electric utilities, it is power. In computing, it is possible to think of hardware as conduit and software (which actually delivers what customers want) as content. In both regulated and competitive markets, an important economic issue is the vertical integration of content and conduit.

In some markets, such as telephone, content and conduit are separated as a matter of law, generally on First Amendment grounds. In other related markets, such as cable and broadcast television, content and conduit can be and generally are integrated24 within each firm. For example, subscribers to a particular cable firm can only buy material that the cable firm chooses to make available. In contrast, anyone can use the telephone network to distribute any information (such as 800 or 900 services); the telephone company has no say in the matter.

The computer industry provides a prime example of how competitive markets evolve. Prior to the early 1980s, virtually all computer companies bundled hardware and software together. An IBM customer had to buy IBM proprietary software, because no other commercially available software ran on IBM machines. This was the era of “closed” computer architecture. In contrast, the PC ushered in the era of “open” architecture, in which hardware vendors encouraged provision of software by as many as possible. The result was a flowering of both hardware and software, with thousands of companies, many no more than a single person, pumping out tens of thousands of software titles. Many have credited this open architecture with the extraordinary growth and richness of the computer industry of the 1980s and 1990s,25 compared to the relatively stately pace of innovation in the closed architecture era. However, in the early 1990s, many software firms complained that Microsoft, the firm that controls the dominant PC operating system (the conduit), has used its OS control to unfair competitive advantage in the applications (content) market, such as word processors, spreadsheets, and presentation graphics.26 After considering such complaints, the

24 This is not to say that cable or broadcast firms actually produce their own content (although broadcasters do produce their own news shows), but rather that they control the content, which they generally purchase from outside entertainment suppliers.

25 See, for example, Kapor, M., 1993, “Where Is the Digital Highway Really Heading? The Case for a Jeffersonian Information Policy,” Wired Magazine, 1(3) July/Aug.

26 This is a different issue from the Department of Justice’s current litigation against Microsoft, in which the focus is Microsoft’s alleged attempt to “leverage” its power in the OS (conduit) market to dominate the emerging Internet browser market (which can itself be viewed as an alternative conduit market).


After considering such complaints, the Department of Justice did not prosecute, instead reaching a relatively mild agreement with Microsoft in 1995 that it cease certain practices.27 No one seriously suggests that Microsoft should not be permitted to compete in the applications software market. However, the example brings home the fact that vertical integration of content and conduit is certain to give rise to allegations of market abuse, if not actual abuse, and so constitutes a public policy problem, either regulatory or antitrust.

In sum: universal service with appropriate service quality, the control of monopoly pricing, and open architectures can be achieved with competitive markets, at least in some cases. However, regulation and/or franchise control have traditionally been the chosen instruments in virtually all electronic network infrastructure industries. In the case of broadband networks, the question is which of these is the more appropriate means of achieving the public policy objectives. It is this question to which we now turn.

6. Public Policy: What Needs To Be Done?

The four issues raised in the previous section present an interrelated set of problems for which various interest groups expect a public policy response. Fortunately, the US Congress, in the Telecommunications Act of 1996, has established a pro-competitive context in which state regulators and legislators, as well as Federal regulators, can respond. However, control of telecommunications in the US is fragmented among 52 local jurisdictions plus the Federal level, suggesting that progress within this framework and the policies adopted may be quite varied, even contradictory. The process by which the individual states and the nation as a whole come to understand what needs to be done is likely to be drawn out over the better part of a decade, after which there will no doubt continue to be some variation among jurisdictions. It needs to be understood, then, that the role of government in the evolution of the Internet is not monolithic, but can be expected to involve many jurisdictions, some of which may be acting against one another at any point in time.

Universal Service

The universal service issue for broadband two-way networks is currently relatively quiescent. A Federal-state Joint Board (of FCC and state regulatory commissions), charged with considering universal service issues in light of the Telecommunications Act of 1996, has on its agenda the Clinton Administration’s proposal to provide “basic” service at 1.54 Mbps to the nation’s schools,28 clearly a broadband issue.

27 United States v. Microsoft Corporation, Civil Action No. 94-1564 (1994) U.S. District Court of the District of Columbia, (“Final Judgment” entered August 21, 1995).

28 1996, “Focus on Universal Service,” Telco Competition Report, BRP Publications, Oct. 24.


Generally, however, few have supported extending the universal service concept to a broadband link into every home in the US, an enormously capital-intensive undertaking. Restricting the universal service concept to below-cost provision of broadband to schools (and possibly libraries) ensures that this will remain a non-issue.

However, this may change if there is a substantial increase in the demand for broadband in rural areas or from disadvantaged groups. Such demand would translate into political action that could redefine universal service to include broadband, possibly fiber to the home (or curb). Should this occur relatively soon, before the industry has had a chance to form, there could well be public intervention requiring all suppliers to provide fiber service to all households and businesses. Indeed, if municipalities are permitted to limit broadband fiber providers by monopoly franchising, as has been done in cable TV, this outcome is highly likely. Even more likely is that firms that believe they have a good chance of winning such monopoly franchises will press legislators toward universal service as a means of justifying monopoly. It could be argued, as above, that only monopoly can ensure that everyone will be served.

Lesson 6: A new technology which is highly valued by all may lead to political demands for “universal service” from low-income and/or high-cost constituents, which is likely to result in some form of government intervention.

Implications for Managers: Demands for “Universal Service” can be translated into protection from market competition by government fiat. Cable television companies argued in the 1970s that they would invest in cabling entire cities only if they were given exclusive franchises, which proved enormously profitable. Similar opportunities are likely to accompany the deployment of emerging broadband technology.

Lesson 7: If a new technology threatens to lead to a single firm gaining a dominant market position, government may intervene to control this “natural monopoly,” either through regulation or antitrust.

Implications for Managers: Dominant firms often make the mistake of treating customers poorly, which can lead to a political demand for public control. This occurred in the cable television industry in 1992, when the industry was re-regulated after much public complaint about high prices and poor service. The cellular industry has been more successful at avoiding price regulation, even though each local market was a duopoly (prior to the introduction of PCS). However, antitrust authorities may pursue firms with dominant positions even without a groundswell of popular negative opinion, as in the Microsoft case. That case also illustrates how effectively firms such as Netscape and Sun Microsystems can influence government antitrust actions.

Should such intervention occur, whether as regulation or as franchised monopoly, it will almost surely be a substantial loss to the nation, for the following reasons:

1. The track record of regulated/franchised monopoly in fostering product innovation has been poor, and in the emerging broadband network industry this form of innovation will be particularly important.


Since no one now knows which services will emerge to capture the interest of consumers, it is essential that firms be permitted to explore the possibilities, that consumers have the maximum choice, and that the market be permitted to evolve in as free and open a fashion as possible. Imposing regulation and/or franchised monopoly on this market would surely throttle this needed innovative process, substituting (whether intended or not) the visible hand of government for the invisible one of the market.

2. There is an existing infrastructure for delivering Internet-type services to everyone. Most schools and libraries have some form of access, and most households have telephones, which permit 28.8 Kbps dial-up access, satisfactory if not perfect for Internet use, at least at present (a rough timing sketch follows this list).

3. There is little evidence that broadband access from the home (as opposed to broadband access from the school, or 28.8 Kbps access from the home) constitutes an essential tool for all Americans to achieve equal opportunity in either the political or the economic marketplace. It could, of course, become a valued entertainment distribution channel, but that is hardly a public policy reason to subsidize universal service.
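As a rough check on the adequacy of 28.8 Kbps access claimed in point 2, the sketch below computes transfer times at several access speeds. The payload sizes and speed tiers are assumptions chosen for illustration, not figures from this chapter.

```python
# Back-of-envelope transfer times at several access speeds.
# Payload sizes and speed tiers are assumptions for illustration only.

speeds_kbps = {"28.8 Kbps modem": 28.8, "1.5 Mbps DSL": 1500.0, "10 Mbps cable modem": 10000.0}
payloads_kb = {"text web page (50 KB)": 50, "software download (5 MB)": 5000}

for payload, size_kb in payloads_kb.items():
    for link, kbps in speeds_kbps.items():
        seconds = size_kb * 8 / kbps   # kilobytes -> kilobits, divided by line rate
        print(f"{payload} over {link}: {seconds:,.1f} s")
```

Under these assumed sizes, ordinary text-oriented web use is workable over a modem, while large transfers and video-grade traffic are not, which is consistent with “satisfactory if not perfect.”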

If regulated monopoly is a poor policy choice, might it nevertheless be better than competition? Recent research29 on competition for broadband access suggests the following:

1. For “reasonable” estimates of cost and demand for broadband distribution, major metropolitan areas appear able to support more than one fiber distributor, but not until demand roughly doubles from present levels.

2. However, competitive deployment of fiber may occur in “rings,” in which the areas of densest population are served by n fiber distributors, the next-densest areas by n-1 distributors, and so on, until the final ring, which is served by only one provider (a simple numerical sketch of this ring structure follows the list). Prices within each ring would reflect its competitive conditions. In high-density cities, fiber is likely to serve the entire metropolitan area; in low-density cities, fiber will not extend to the outlying areas.

3. If competitors anticipate gains from being the largest provider in this “ring” model, then there are gains from preemptive investment in the early years to ensure that the first mover locks in its advantage for the future. This would lead to more extensive investment, and therefore greater geographic coverage, than a static oligopoly model would suggest. Paradoxically, the dynamic competitive chase for long-term profits may lead the market to achieve the universal service that regulators previously believed had to be mandated.
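The toy computation below illustrates the ring structure described in points 2 and 3. The household counts, revenues, and fixed costs are invented solely for illustration; they are not drawn from the Faulhaber and Hogendorn model cited here.

```python
# Toy "ring" model of competitive fiber deployment in a metropolitan area.
# Household counts, revenues, and fixed costs are invented for illustration only.

rings = [
    # (ring, households, annual revenue per household ($), annual fixed cost per provider ($))
    ("urban core",     200_000, 300.0, 20_000_000),
    ("inner suburbs",  150_000, 300.0, 22_000_000),
    ("outer suburbs",  100_000, 300.0, 25_000_000),
    ("exurban fringe",  40_000, 300.0, 30_000_000),
]

def providers_supported(households, rev_per_hh, fixed_cost, max_firms=5):
    """Largest number of firms that can each break even if they split the ring's demand equally."""
    for n in range(max_firms, 0, -1):
        if households * rev_per_hh / n >= fixed_cost:
            return n
    return 0

for name, hh, rev, fc in rings:
    print(f"{name}: supports {providers_supported(hh, rev, fc)} provider(s)")
```

With these invented figures, competition thins from the core outward and the fringe goes unserved, precisely the pattern that a universal service obligation would target.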

Unfortunately, this happy outcome may never occur. A universal service requirement imposes a fixed cost on entrants that would constrain the number of fiber providers willing to enter the market.

29 Faulhaber, Gerald R., and Christiaan Hogendorn, “The Market Structure of Broadband Telecommunications” working paper, Public Policy & Management Dept., Wharton School, Univ. of Pennsylvania, August, 1998.


Simulation analysis30 suggests that imposing a universal service constraint requiring service to be made available to (say) 95% of households in a metropolitan area increases the cost of providing fiber sufficiently that initial entry, as well as subsequent competitive entry, is feasible only at greater demand levels than would otherwise be the case. The reason is that the cost of supplying fiber infrastructure to unprofitable customers may exceed even the duopoly profits from the profitable markets. Thus, imposing the universal service obligation may lead, at certain demand levels, to monopoly, even if unconstrained competition could support multiple fiber vendors. Of course, the price charged under this scenario would be a monopoly price, substantially higher than most customers would pay under unconstrained competition. The only constraint on monopoly pricing in this scenario would be the presence of satellite services, should satellite vendors choose to (and be permitted to) compete.
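Continuing the invented ring figures above (again, an illustration only, not the simulation model cited in the text), the sketch below shows how a coverage obligation changes the entry calculus: a firm required to serve the unprofitable fringe must recover that loss from the profitable rings, so the demand level at which a second entrant breaks even rises.

```python
# Toy illustration of a universal service (coverage) obligation raising the entry threshold.
# Ring figures are invented and carried over from the sketch above; this is not the
# simulation model cited in the text.

rings = [  # (households, annual revenue per household ($), annual fixed cost per provider ($))
    (200_000, 300.0, 20_000_000),
    (150_000, 300.0, 22_000_000),
    (100_000, 300.0, 25_000_000),
    (40_000,  300.0, 30_000_000),   # unprofitable fringe ring
]

def per_firm_profit(n_firms, demand_scale=1.0, must_serve_all=False):
    """Per-firm profit when n_firms split every ring they serve; demand_scale scales revenue."""
    total = 0.0
    for hh, rev, fc in rings:
        ring_profit = hh * rev * demand_scale / n_firms - fc
        if must_serve_all or ring_profit > 0:   # unconstrained firms skip loss-making rings
            total += ring_profit
    return total

for scale in (1.0, 1.3, 1.6):
    print(f"demand x{scale}: "
          f"unconstrained duopoly {per_firm_profit(2, scale):,.0f}; "
          f"obligated duopoly {per_firm_profit(2, scale, True):,.0f}; "
          f"obligated monopoly {per_firm_profit(1, scale, True):,.0f}")
```

With these numbers, an unconstrained duopoly is viable at current demand, an obligated duopoly is not viable until demand grows substantially, and an obligated monopolist remains profitable throughout: the obligation tilts the market toward monopoly, as described above.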

On balance, then, it would appear that competitive provision of broadband access is superior to any form of regulation or franchising. Further, unless there is significant pressure from rural or disadvantaged groups for below-cost provision of broadband access to the home, it should be relatively easy for legislators, regulators, and municipalities to resist vendor demands for monopoly franchises. The policy direction established by the Telecommunications Act of 1996 should provide a rationale for policy makers to take the competitive option.

It is important to realize, however, that two or more fiber providers are unlikely to enter the market simultaneously. More probably, a single firm will enter first, expanding its service area over time and competing mainly with satellite providers. A second fiber firm may not enter for several years, until demand levels are sufficiently high to support two providers. During this interim period, the temptation to regulate the monopoly may be quite strong. It is critical that this temptation be resisted, as efficient and innovative competitors are unlikely to emerge in a regulated environment.

Quality of Service

The evolution of the Internet into the two-way broadband network of the future has been both exciting and painful. The network itself, its administrative support, and its governance structure were all designed for a much different environment. Institutions and infrastructure designed to meet the needs of university researchers around the world are quite unsuitable for the high-growth, high-volume, commercialized mass-market service the Internet has become in the last year. What is amazing is not that the Internet is congested (which it clearly is31), but that it has not collapsed under the crushing weight of unprecedented traffic volumes. The problem is clear: investment in Internet capacity has not kept pace with the growth of demand, leading to a slow-down of the Internet.

30 Faulhaber and Hogendorn, op. cit.

31 See, for example, “Why the Net Should Grow Up,” Economist, Oct. 19, 1996.


In some places (such as transoceanic routes) and for some uses (such as telnet and real-time video), this congestion has made the Internet almost unusable.

Does this call for a public policy intervention? The days in which the US government directly provided or even managed network capacity are long gone. At best, the government can assist in the transition to new institutions, which can then address the capacity problem. In fact, the new governance structures discussed in the previous section are precisely the correct public intervention. As the Internet continues its transformation from enthusiast’s toy to mission-critical infrastructure, pricing, revenue-sharing, and investment incentives will be put into place to ensure the smooth and rapid response of service providers’ capacity to changes in demand.

Monopoly and Vertical Integration

There is a reasonable chance that limited competition in broadband may emerge. Not only may there be more than one fiber provider; there is likely to be wireless coverage as well. Further, existing infrastructure providers are currently developing technologies that increase the effective bandwidth of their networks. Cable firms are experimenting with cable modems, which promise in-bound speeds of 10 Mbps, although there is some concern over how much bandwidth cable systems can actually deliver under heavy Internet usage (a rough calculation below illustrates the issue). Telephone companies are experimenting with Digital Subscriber Line (DSL), a technology that will permit over 1 Mbps in-bound over existing telephone lines. All these technologies would certainly compete with each other, provided they are actually deployed.
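The sketch below makes the cable bandwidth concern concrete. Because a cable modem channel is shared by all active users on a node, effective per-user throughput falls as adoption and usage grow; the node size, take rates, and usage shares are assumptions for illustration, not data from this chapter.

```python
# Rough shared-bandwidth arithmetic for a cable modem node.
# Channel capacity, node size, take rates, and usage shares are illustrative assumptions.

channel_capacity_mbps = 10.0    # assumed downstream capacity shared by one node
homes_on_node = 500             # assumed homes passed per node

for take_rate, active_share in [(0.05, 0.2), (0.20, 0.3), (0.40, 0.4)]:
    active_users = max(1, round(homes_on_node * take_rate * active_share))
    per_user_mbps = channel_capacity_mbps / active_users
    print(f"take rate {take_rate:.0%}, {active_users} simultaneous users -> "
          f"{per_user_mbps:.2f} Mbps each")
```

Under these assumed figures, the advertised 10 Mbps degrades to well under 1 Mbps per active user as penetration rises, which is the substance of the concern.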

The larger concern is whether these competing technologies will in fact be deployed. It is possible (some would say likely) that, after all the grand announcements, alliances, IPOs, and other fanfare, only the telephone companies will actually lay fiber to the curb, and will thereby control the only broadband two-way distribution channel into the home. In that case, two problems confront public policy makers. The first is the classic problem of monopoly: a firm takes advantage of its market position to charge prices higher than costs. The second is the problem of access control: the monopoly firm chooses the content its users can access, which limits both its customers and potential suppliers. Monopolies tend to be closed architecture systems, with a limited choice controlled by the “bottleneck” supplier.

On balance, the second problem is likely to be more serious than the first. If the market can support only a single supplier, then monopoly prices are probably not very much higher than total costs; if they were, the market could support more than one supplier. Of course, the monopoly might be a temporary one, lasting only until other firms can deploy resources to compete. In that case, it is particularly important that antitrust authorities be alert to attempts by the incumbent to raise potential rivals’ entry costs or to engage in other anticompetitive behavior. In any case, this would appear to be a problem for the antitrust authorities.

The second problem is somewhat more difficult. Should a single firm become the monopoly supplier of broadband distribution, it is likely to control content, increasing its profits through price discrimination among content providers.


By analogy with the IBM-dominated computer market of the 1970s, we would expect proprietary content provision in a closed architecture, without the profusion of content and access that a more competitive market would provide. If such a monopoly emerges, even temporarily, how should policy makers respond?32

Fortunately, the FCC has already adopted the Open Video Systems (OVS) approach, under which a telephone company (indeed, any OVS supplier) providing video distribution to the home is required to give access to any content provider that wishes to use its capacity, under the same terms and conditions it gives its own content operations. In this model, the facilities supplier is not barred from providing content, but it must make its facilities available to other content providers on the same terms and conditions it offers its own content provider.33 While this approach is not without problems, it does represent a regulatory means of converting an otherwise bottleneck facility into an open architecture system.

In fact, this is a good example of a regulatory intervention that opens up markets to a far richer supply structure than would otherwise obtain, and certainly far richer than would obtain under traditional rate-base rate-of-return monopoly regulation. Should temporary monopoly of two-way broadband facilities become a problem, then this relatively light touch of regulation designed to open access to any content provider is an effective solution to that problem.

Lesson 8: If the technology leads to firm dominance in a “bottleneck” market, there will be a political demand for government to limit the dominant firm’s ability to vertically integrate.

Implications for Managers: If your firm must use the bottleneck market, you should lobby hard to keep the bottleneck firm out of your market through government-imposed restrictions on vertical integration. If you are the bottleneck firm, aggressive “ultra-competitive” behavior toward competitors is sure to invite public scrutiny, with possibly unwanted restrictions resulting. Realize also that how aggressively the government pursues allegations of anticompetitive behavior changes with administrations.

7. Conclusions

The past, present, and future of the Internet provide a rich case study of the role of government in emerging technologies. While Internet aficionados and e-commerce entrepreneurs view this new world as totally free-market and highly competitive (which it is), it is perhaps surprising that it has been, and will continue to be, thoroughly suffused by government interventions and, thankfully, by timely dis-interventions.

32 This analysis draws on Kaplan, James, 1996, “Integration, Competition, and Industry Structure in Broadband Communications,” Wharton School Advanced Study Project Paper.

33 FCC Report and Order and Notice of Proposed Rulemaking, Docket 96-99, March 11, 1996.


Until quite recently, the Internet has been almost exclusively a US phenomenon. This is about to change, and therein lies perhaps the greatest governmental challenge the Internet community faces in the next few years: how well can we deal with political and legal issues across a wide range of countries, many with divergent interests and cultures? As in the past, the ability of governments and the Internet community to adapt to this dynamic emerging technology will determine whether or not its potential can be realized.
