Bachelor Thesis, 2008
34 Pages, Grade: 1.3
The Internet as a global communication infrastructure
1. The Internet as a critical resource to society
2. What is the Internet? Main characteristics of the Internet architecture
3. Technology and society - the academic discourse
4. Contradictions and the lack of clarity in the LTS approach
5. Technology and society - a counter proposal
Internet and society in a mutual system/environment relationship
6. What constitutes a system, what does a system constitute?
7. Fundamental system elements and system operations
8. System types
9. System differentiation
10. Loose and strict coupling
Internet governance as contextual intervention
12. Internet governance
13. Internet governance as contextual intervention
Appendix A: Bibliography
Appendix B: List of abbreviations
“If we look at other innovative technologies that fundamentally transformed human communications - the printing press, the telephone and television, to name a few - we are confronted with the fact that it takes generations for their full effects to be understood.”1
While the Internet has become a critical infrastructure to modern society, social scientists, economists, regulators and engineers still try to figure out the entanglements of policy models, market dynamics and technological principles which somehow seem to be inextricably connected and shape the ‘evolution’ of this technology.
Looking back, we do know a lot about other large infrastructures which had a massive impact on society once they gained momentum. We have, for example, quite a good idea of how the railroad network developed and which technical, organisational and regulatory hurdles had to be overcome in order to interconnect fragmented regional railroad systems (as was the case in the United States). In contrast to railroad tracks, Internet infrastructure is quite invisible in many regards.
The biggest concern for a social scientist in this regard is probably the question of how this technology is shaped by - and shaping - society. As Bernward Joerges puts it in a publication concerned with ‘Large Technical Systems’ (LTS): “LTS seem to surpass the capacity for reflexive actions of actors responsible for operating, regulating, managing and redesigning them in ways which, as social scientists, we understand poorly. How do we account for and explain the (most of the time) relative stability of such systems?” (Joerges 1988: 26).
How can we model the linkage between this large technological system and social systems? To what extent are business policy, regulation and expertise determined by - or determining - technological design principles? Is the current governance constellation sustainable? Does it meet society’s current and future demands on a global communication infrastructure?
This paper does not address all of these questions, but rather tries to develop a theoretical model capable of identifying some (mutual) dependency relationships between Internet infrastructure development and society.
The first part of this text is concerned with the Internet as a global communication infrastructure. It will be argued that the Internet, although originally not planned as a universal, global communication technology, has indeed become indispensable to large parts of society. Its worldwide distribution, its impressive growth rate and the fact that it obviously is an excellent platform for innovation demand explanation. Therefore, its technical core principles, which apparently contributed much to this success, will be examined. The focus then shifts towards the more general question of how the relationship between technology and society can be understood from a theoretical perspective. Since the approaches examined here prove unsatisfactory, a counter proposal will be made.
The second part will explore this proposal, which conceptualizes technology (in the specific case of Internet infrastructure) and society in a mutual system/environment relationship. Basic assumptions of sociological systems theory will be heuristically applied to the Internet as a technological system, and structural couplings between both will be identified. This ‘double perspective’ is regarded as necessary in order to avoid technological determinism on the one hand and a contradictory understanding of technology on the other.
The third part comes back to the notion of governance and takes a look at present conceptions of “Internet governance”. If the Internet relies on social coordination and cooperation as much as society relies on a global communication infrastructure, how does governance take place, and to what extent could structural couplings be understood as opportunities for contextual intervention (Willke 1989)?
The fourth part will present some conclusions on the question which has been the starting point of this paper, as well as on those questions which have arisen in the course of the investigation.
Some of the publications which have had a great influence on my approach and my understanding of society and (Internet) technology shall be mentioned. First of all, the conception of modern society as an autopoietic, functionally differentiated social system whose basic elements are communications goes back to Niklas Luhmann (e.g. Luhmann 1995). The theory has been further elaborated and extended with governance theory by, among others, Helmut Willke (e.g. Willke 2007).
Regarding ICT (Information and Communication Technology) regulation and cognitive issues thereof, I highly recommend two publications by Johannes M. Bauer (Bauer 2004) and P. H. Longstaff (Longstaff 2003). Finally, the lecture “The Future of the Internet” given by Stanford Assistant Professor Ramesh Johari has given me valuable insights into the dynamics of Internet development; the lecture has unfortunately not been transcribed, but it has been recorded and is publicly available on the “Stanford on iTunes” website (see bibliography, Johari 2007).
“Society comes to terms with the existence of technology. It proceeds on the assumption that the car will start.” (Luhmann 1993: 98)
The Internet has become a critical infrastructure of modern society. At least in developed countries, reliable, fast and reasonably priced Internet access is almost a matter of course. According to an OECD study, the number of Internet users continues to grow rapidly. 265 million subscribers to fixed Internet connections had been counted by the end of 2005; 60 percent of them were using broadband access (OECD Outlook 2007: 130). Internet connectivity is seen and traded as a commodity, almost like a natural resource. “Just as the industrial revolution depended on oil and other energy sources, the information revolution is fueled by bandwidth”, Tim Wu recently suggested in a newspaper article.2
Figure 1: Internet subscribers per 100 inhabitants in OECD countries (illustration not visible in this excerpt). Source: OECD Outlook 2007, p. 133
But if one examines more closely how data packets make their way from one location to another, it seems a small miracle that the Internet actually ‘works’ at all and that its basic design principles have remained quite unaltered over the last decades. After all, its core architecture, the platform on which applications and services are deployed, was not designed to become the global communication medium that it is today. Unlike the telephone network, for example, the Internet has no built-in accounting mechanism; it cannot even guarantee the delivery of data packets. Due to its packet-switching architecture, the Internet is a patchwork of selectively connected ‘Autonomous Systems’ which exchange and deliver data packets on a “best effort” basis. Unlike the railroad system, traffic routes and interchanges are not centrally planned; in fact, intermediary networks have hardly any idea what their surroundings look like, and they have no influence on how packets will be routed once they leave their domain. They do not even know what exactly they are transmitting.3
But these issues do not only concern network operators or regulators. Even scientists admit that, due to the lack of measurement instruments, they “[…] don’t really know what the Internet actually is.” (Claffy et al 2007: 56). But uncertainties and ignorance are not merely a matter of complexity; after all, the Internet features some quite effective mechanisms to ‘keep it simple’. The Internet platform is rather a “stupid network”4 by design. It restricts itself to neutral transportation on a “best-effort” basis and externalizes any additional functionality (e.g. delivery guarantees, data encryption, error checking) to the applications which run on top of it. From a technical standpoint, it simply would not make sense to implement those features as low-level mechanisms (Saltzer et al 1984). This has, at the same time, tremendous effects on the economy as well as on the legal and the political system. An interesting thought experiment would be to imagine what the ideal global communication network would look like from the perspective of an economist or a regulator.
But the Internet was neither invented by economists nor planned by politicians; its origins can be traced back to a research group related to the Defense Advanced Research Projects Agency (DARPA), and its commercialization took place around 1988, when the National Science Foundation (NSF) enforced an “Acceptable Use Policy”, encouraging the private sector to connect to regional exchange points while at the same time prohibiting backbone usage for purposes “[…] not in support of Research and Education.” (see id.) This clever move suddenly created market incentives for private, long-haul networks. In the following years, the transition from a government-sponsored backbone network to multiple commercially owned backbone networks took place, shifting the deployment of network resources from public to private actors.5 One can probably not stress enough what a giant (and risky) leap this transformation of a research project into a privatized patchwork of networks, glued together by market incentives and business policies, has been. On the other hand, turning over the coordination and cooperation mechanisms regarding inter-connectivity to markets does by no means imply that the economy is in control of the Internet.6
Markets have preconditions which they cannot establish by themselves (Willke 2007: 21); companies are subject to national legislation and regulation; the network routing policies, technical practices and acceptable behaviour guidelines have been developed and documented in the form of “Request for Comments”7 (RFC) documents by scientists and engineers legitimized by expertise. And these are just a few examples of the - sometimes inextricable - entanglement of governance mechanisms which will be examined in this paper.
“Because no systemic measurement activities exist for collecting rigorous empirical Internet data, in many ways, we don’t really know what the Internet actually is.” (Claffy et al 2007: 56)
What is commonly called “the Internet” can be separated into two domains: the “platform” (the physical infrastructure of the network) and the applications and services that run on top of it. Both aspects of the Internet are relevant to society in general and to governance issues in particular, but only the former will be considered in the following. It should be noted, however, that the platform and the applications and services, although operating to a large extent quite separately from each other, form a symbiotic or circular relationship. Applications and services require a sustainable platform and can only develop within its limitations (technology-push); at the same time, the technical requirements and consumer-related demands of emerging applications and services massively shape the way in which the platform develops (market-pull). Consumer trends, such as the recent increase in peer-to-peer application usage, can radically change the distribution of traffic within and between interconnected networks within a relatively short period of time.
The impact of peer-to-peer applications on traffic demographics and network infrastructure

The Internet topology has originally been planned and developed as a client-server architecture. In this model, relatively few but well-connected servers “serve” the documents and services requested by the clients (end users). These servers usually have a high-capacity connection to Internet service providers. Most of the generated traffic flows vertically, i.e. from content providers through transit providers to access providers, and vice versa. As in this model the amount of traffic which flows from the client to the server (“upstream”) is most likely much less than the amount of traffic from the server to the client (“downstream”), subscriber lines usually offer a high-capacity downstream and a small-capacity upstream.
Figure 2: Traffic usage patterns (illustration not visible in this excerpt). Source: Sandvine 2008
With the rise of peer-to-peer applications (such as Napster or BitTorrent) beginning in the late nineties, traffic demographics began to change: due to developments in end-user bandwidth capacity, most importantly the introduction of ADSL, clients were now able to exchange even large video files among each other. This has led to a massive increase in client-to-client (horizontal) traffic which to a large degree re-shaped the average usage of deployed infrastructure. A recent study on traffic demographics in Northern America conducted by a Canadian networking equipment company highlights the relationship between platform and applications: “The design of these networks, which dictates that downstream traffic has more available bandwidth than upstream traffic, was originally based on usage patterns and usage behavior from early content-consuming applications such as web-browsing. However, the continual evolution of applications from content-consuming to always-on content-supplying means that current traffic patterns and usage behavior no longer fit these bandwidth assumptions.” (Sandvine 2008: 7)
Conversely, technical advancements on the platform level, for example bandwidth availability, promote the development of new services such as video streaming and Internet telephony. Without going into too much detail here, it should be clear that the relationship between platform and applications/services is neither trivial nor linear. As it is the platform which is of concern here, a first working definition will be introduced. The term “Internet”8 in this paper will henceforth refer to the platform (infrastructure) level unless explicitly stated otherwise.
A technical definition of the Internet would be: the Internet is a global end-to-end network of independently owned and operated computer networks which transmit data by packet-switching using the TCP/IP protocol suite. Five aspects of this definition are crucial for this paper’s topic and require some further explanation. First of all, the Internet is a global network. Although it has almost exclusively been developed by scientists within the United States, it certainly is a truly global technology nowadays. This makes it quite interesting to examine how its deployment developed under distinct legislatures and regulatory frameworks and what challenges it poses to sovereignty.
Secondly, the Internet is an end-to-end network. The end-to-end (e2e) principle states that network functionality (and therefore complexity) should be kept to a minimum at the core and instead be ceded to the very periphery of the network, the clients. Error checking, data encryption, duplication checks etc. are performed by the applications which run on the source and destination of a connection. “The general rationale behind the e2e model is that the network doesn’t have to know the applications running on it because it’s simply a neutral transport medium.” (Claffy et al 2007: 55).
Thirdly, the Internet is not a single network, but a network of independently owned and operated networks. Despite our daily user experience, for example when browsing the World Wide Web, the computer in use is not connected to, or even part of, ‘the Internet’. Instead, it is connected to an independently owned and operated network (also called an Autonomous System) which in turn is connected to other independently owned and operated networks by traffic transit or exchange agreements. Transit and peering agreements are two kinds of bilateral business contracts that govern the terms under which Autonomous Systems pay each other for connectivity. In a transit agreement, the traffic that is exchanged has to be paid for (e.g. a backbone provider will demand payment for the long-distance connectivity it offers). In a peering agreement, both sides agree to exchange traffic for free (within certain parameters); this usually takes place among AS of approximately the same size (see also figure 6 on page 24). Universal connectivity, therefore, is something which has to be achieved by business contracts - it is neither a technical necessity nor can it be achieved by regulation alone.
The fourth important characteristic of the Internet is that data packets are transmitted by packet-switching. The easiest way to describe how packet-switching works is to begin with its counter-principle, circuit-switching. In telecommunications, a circuit-switching network establishes the connection between source and destination as a fixed-bandwidth circuit (or channel), as if they were physically connected by an electrical circuit. Packet-switching, in contrast, means that data is not sent through a dedicated, physical circuit like telephone calls. Instead, the data is rather ‘delivered’ like a parcel. Packet-switching breaks all data which has to be transmitted into small units called ‘packets’. Every packet must contain additional information (the ‘header’) so the nodes of the network can determine how to route the packet.
Figure 3: Packet-switching and packet management (illustration not visible in this excerpt)
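The mechanics just described - splitting data into headed packets that may travel and arrive independently - can be sketched in a few lines of Python. This is a toy illustration of the principle only, not a real IP implementation; the packet size, header fields and addresses are invented for the example.

```python
# Toy packet-switching sketch: split a message into small packets, each
# with a header (source, destination, sequence number), then reassemble
# them at the destination even if the network delivered them out of order.
import random

PACKET_SIZE = 8  # payload bytes per packet (tiny, for illustration)

def packetize(message: bytes, src: str, dst: str):
    """Break a message into (header, payload) packets."""
    packets = []
    for seq, start in enumerate(range(0, len(message), PACKET_SIZE)):
        header = {"src": src, "dst": dst, "seq": seq}
        packets.append((header, message[start:start + PACKET_SIZE]))
    return packets

def reassemble(packets):
    """Reorder packets by sequence number and concatenate their payloads."""
    ordered = sorted(packets, key=lambda p: p[0]["seq"])
    return b"".join(payload for _, payload in ordered)

msg = b"packets may arrive out of order"
packets = packetize(msg, src="10.0.0.1", dst="10.0.0.2")
random.shuffle(packets)  # the network gives no ordering guarantee
assert reassemble(packets) == msg
```

The header is what allows each intermediate node to handle a packet in isolation - precisely the property that distinguishes packet-switching from a dedicated circuit.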
The fifth characteristic of the Internet is its set of communication protocols, the TCP/IP protocol suite. A protocol suite is like a language: it ensures that the connected hosts understand each other. The suite encompasses a variety of different protocols for different purposes, of which TCP (responsible for reliable data transmission) and IP (responsible for the routing of the data packets) are the most prominent.
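The division of labour within the suite can be seen from the application's point of view with a minimal loopback example in Python. The sketch below (connecting a client and server on the local machine; host and message are arbitrary choices for illustration) shows that an application using TCP sees only a reliable byte stream - packetization, routing and retransmission are handled below, by TCP and IP.

```python
# Minimal loopback sketch: the application writes bytes into a TCP
# socket and reads them back intact; TCP provides reliable, ordered
# delivery on top of IP's best-effort packet routing.
import socket
import threading

def echo_server(server_sock):
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # echo the bytes back to the client

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello over TCP/IP")
reply = client.recv(1024)
client.close()
t.join()
server.close()

assert reply == b"hello over TCP/IP"
```

Note that nothing in this code deals with packets, routes or loss: that opacity is the practical upshot of the layered protocol suite described above.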
1 Vint Cerf in a BBC article. Available online: http://news.bbc.co.uk/2/hi/technology/6960896.stm
2 New York Times, July 30, 2008: OPEC 2.0 by Tim Wu
3 There are some exceptions: the Chinese government has implemented a low-level packet analysis (“deep packet inspection”) in order to censor undesirable content. Some Autonomous Systems use this technique as an intrusion detection / prevention system (IDS/IPS), or to provide the government intercept capabilities.
4 Isenberg, David (1997): Rise of the Stupid Network. URL: http://isen.com/stupid.html
5 For a more detailed history of the Internet, see Leiner et al (2003).
6 Some critics argue that by turning over the Internet Protocol (IP) address system as well as the Domain Name System (DNS) to markets, the U.S. government has given away both key components of the Internet (transmission and addressing) to private control (Shah / Kesan 2007: 3f).
7 “Request for Comments” (RFC) is a knowledge base organized by the Internet Engineering Task Force (IETF) describing and discussing research, innovations, methods, and behaviours applicable to the working of the Internet and Internet-connected systems.
8 Originally, the term ‘internet’ referred generally to any network of computer networks, whereas the term ‘the Internet’ (with a capital first letter) refers to the specific, publicly available network of networks we usually speak of.