Web 2.0 in a Bigger Context – Social and Macro-economical Implications


Thesis, 2006

76 Pages, Grade: 1.6


Excerpt


Table of Contents

1 Introduction
1.1 Problem Definition and Motivation
1.2 Research Design and Methodology

2 Approaching Web 2.0
2.1 Definitions and Origin of the Term
2.2 Central Ideas of Web 2.0
2.3 Semantic Web and Web 2.0

3 Web 2.0 Navigator - a Three Layer Model for Web 2.0
3.1 Developing the Web 2.0 Navigator
3.2 Technological Layer
3.2.1 AJAX
3.2.2 XML
3.2.3 API
3.2.4 RSS
3.2.5 SPARQL and RDF
3.3 Conceptual Layer
3.3.1 User Participation
3.3.2 Social Networks and Communities
3.3.3 Collective Intelligence and Group Decision Making
3.3.4 Folksonomy (Tagging)
3.3.5 Long Tail
3.3.6 Mash Ups
3.4 Application Layer
3.4.1 Online Community Systems
3.4.2 Blogs
3.4.3 Corporate Blogs
3.4.4 Wikis
3.4.5 Case: Wikipedia.org
3.4.6 Instant Messaging
3.4.7 VOIP

4 Social Effects
4.1 Social Life
4.2 Social Capital
4.2.1 Study: How the Internet is Affecting Social Capital
4.2.2 Case: Campaigns Wikia - Web 2.0 in Politics
4.3 E-Learning
4.3.1 Web 2.0 and Education
4.3.2 Case: Educational Blogs for University Students
4.4 Social Commerce
4.4.1 Social Commerce - Social Networks and the Transparent Product
4.4.2 Case: Spreadshirt - Splitting Trade and Distribution

5 Macro-economical Effects
5.1 Economical Opportunities
5.1.1 Economical Opportunities - Harvesting Global Cooperation
5.1.2 Case: InnoCentive - The Future of Corporate R&D
5.2 Digital Divide
5.2.1 Digital Divide - the Gap between High-Tech and No-Tech
5.2.2 Case: One Laptop per Child - The $100 Laptop

6 Outlook and Conclusion

Bibliography

List of Figures

Figure 1: Illustration comparing Web 1.0 with Web 2.0 services

Figure 2: Different categories of Internet applications

Figure 3: Web 2.0 Navigator - a three layer conception of Web 2.0

Figure 4: The traditional model for Web applications compared to the Ajax model

Figure 5: SPARQL query and its translation into a relational operator tree

Figure 6: Tag cloud Web 2.0

Figure 7: Typical long tail curve

Figure 8: Deployment of blogs in a corporate environment

Figure 9: VOIP architecture

List of Abbreviations

illustration not visible in this excerpt

Keywords:

internet, Web 2.0, user participation, social networks, collective intelligence, folksonomy, longtail, mash up, AJAX, XML, RSS, SPARQL, blog, wiki, instant messaging, VOIP, social capital, education, globalization, digital divide

1 Introduction

1.1 Problem Definition and Motivation

Recently, a very prominent addition to the internet-related vocabulary has been introduced: the Web 2.0. The term was originally coined by Dale Dougherty of O'Reilly Media in summer 2004 and emerged while brainstorming on a name for an innovative internet conference. The emergence of Web 2.0 as an important novum to the internet community becomes obvious to the user just by browsing the internet. The term seems to have become a ubiquitous buzzword. Unfortunately, the term has been hyped without a collective understanding, and clear-cut definitions are missing. This lack of clarity hinders a vital understanding of the influence of the Web 2.0 on its social and (macro-)economical context.

As a first approach to grasping the term, the online encyclopaedia Wikipedia is quoted to get a current overview of what is commonly understood as Web 2.0 at this very moment in time.1 The new Web is described as a “second phase of development of the World Wide Web, including its architecture and its applications.” It is referred to as a computing platform, a “transition from isolated information silos to [seamless exchange of] content and functionality.” The new Web is expected to become “a computing platform serving Web applications to end users.” Additionally, its nature as a social phenomenon enabling “Open communication, decentralization of authority, freedom to share and re-use” and its increasing orientation “toward[s] interaction and social networks” is pointed out. The different approaches used to define the Web 2.0 prove that the nature of the new Web is rather complex and not easily defined, as are the social and economical implications related to its appearance.

Some concepts for defining the Web 2.0 already exist. Authors have taken the complex nature of Web 2.0 into account and developed models distinguishing different dimensions of the new Web.2 But there is still demand for a comprehensive model grasping as many related features and topics as possible. This paper suggests a three layer model to embrace the full extent of the new Web. To develop such a model, extensive studies of current literature, much of which was available only online, have been conducted. The most important issues regarding the conception and layout of the Web have been identified and integrated into the “Web 2.0 Navigator”: all recurring issues have been grouped and arranged into a three layer model differentiating the technological layer, the conceptual layer and the application layer. While the technological layer displays the Web 2.0 as an architecture, the conceptual layer covers the philosophy, which the author considers the central innovation around Web 2.0. The application layer, finally, is a collection of widely used services and applications around the Web, which have been developed hand in hand with the architecture and have strongly promoted the rise of the Web 2.0. It captures the Web 2.0 as a service oriented platform, allowing for the applications introduced. The developed model was further used to point out important effects and implications. The effects have been grouped into individual, social and macro-economical effects to display once again the invasive and comprehensive nature of the Web 2.0 and its versatile relations to the real life context.

The intention of this paper is to suggest a clear-cut definition of the Web 2.0 to allow for an accurate understanding. Furthermore, the developed model is used to examine evident and possible effects on the social and macro-economical environment and to derive implications.

1.2 Research Design and Methodology

The research was designed to meet the intentions expressed above. The logical structure of the paper takes into account that a meaningful and relevant analysis of the effects can only be realized through a comprehensive understanding of the whole conception of the Web 2.0 along all relevant layers. This is indispensable for setting Web 2.0 into a significant social and macro-economical context.

After the introduction, first attempts to approach Web 2.0 are described: firstly by presenting the different definitions available, and furthermore by selecting and introducing the core concepts: user participation, the Web as a platform and the attempt to realize a higher degree of interoperability. The section closes by relating the Web 2.0 to the Semantic Web: similarities are displayed and important differences between the two concepts are outlined.

The main part of the paper is split into two sections. First, the “Web 2.0 Navigator” is introduced, its development explicated and each layer of the model closely examined. The technology behind Web 2.0 is explained briefly, as are the applications and services. Both layers are linked by a deeper examination of the concepts and their relevance. The second section introduces selected areas of real life where effects of the new Web are evident. Due to the highly complex interrelations of the Web 2.0 with almost any domain of real life, it is not possible in this paper to capture all affected areas of interest. Therefore, and to ensure a practical analysis with a high degree of relevance, certain areas, such as social life, education and effects on the digital divide, are examined, and selected examples with a high practical relevance are given for each topic. This paper suggests a three layer model to embrace the full extent of the new Web. The model is further used to explain emerging social and macro-economical effects and to relate these effects to prominent features of Web 2.0, as displayed in the model.

Although the use of articles from the online encyclopaedia Wikipedia is widely considered unacceptable in a scientific context, the author consciously decided to make use of this source. The specific use of selected articles seems adequate to provide accurate and current information and to embrace the dynamic character of the topic. Selected Wikipedia quotes will, in a very Web 2.0 manner, support an analysis founded on current and up-to-date collective intelligence.

"When people say ‘ Web 2.0 ’ now, I have some idea what they mean. And the fact that I both despise the phrase and understand it is the surest proof that it has started to mean something."3

2 Approaching Web 2.0

2.1 Definitions and Origin of the Term

The term Web 2.0 was initially coined by Dale Dougherty, an employee of the O’Reilly publishing house, and Craig Cline, an employee of the MediaLive event agency. While looking for a name for an innovative internet conference, the term Web 2.0 emerged during a brainstorming session of the two.4 Based on the comparison of established internet services and their innovative follow-ups, the term was used to describe the evolutionary step between old and new service approaches. The following figure shows a common comparison of Web 1.0 and Web 2.0 internet services and their companies.

Figure 1: Illustration comparing Web 1.0 with Web 2.0 services

illustration not visible in this excerpt

Today there is still no definitive definition (à la Encyclopaedia Britannica) of the term Web 2.0 available. On the other hand, there is wide agreement on a fundamental understanding of what Web 2.0 could be. Most of this understanding was undoubtedly shaped by Tim O’Reilly, founder of the O’Reilly publishing house, in 2005 through various essays and blogs, where he published his ideas in distinct Web 2.0 fashion. The following short definition is taken from his essay titled "Web 2.0 Compact Definition?"

"Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and ser vices in a form that allows remixing by others, creating network effects through an "architecture of participation," and going beyond the page metaphor of Web 1.0 to deliver rich user experiences."5

While this compact definition already includes various important points about the new Web 2.0 philosophy, such as user participation, mash ups, networking and rich user experience, O’Reilly still refuses to establish a fixed and ultimate definition. His idea of the dynamic nature and the state of the “perpetual beta”6 version of Web 2.0 led him, again very much in Web 2.0 manner, to initiate a discussion among the internet community about the proposed definition in his blog.7 This rich discussion was condensed again in his essay "What is Web 2.0 - Design Patterns and Business Models for the Next Generation of Software".8

As already mentioned in the introduction, many core ideas are hardly new; rather, the concept of naming and summarizing all these techniques, concepts and applications within one term seems to be the idea behind the term Web 2.0. This new philosophy can be understood as a logical continuation from the first concepts of social software in 1979 until today, and it has been ideologically closely related to the open source community.9 Paul Graham stated in his essay “Web 2.0“, already quoted above, that the ultimate intention of introducing the term Web 2.0 was to point out that the internet was about to gain new relevance after the burst of the Dot-Com-Bubble in 2000, instead of simply promoting a “new version” of the internet.10 The term was therefore chosen as a catchy label to attract attention throughout the internet community rather than to be perfectly defined from scratch.

It is obvious that Web 2.0 is neither merely a collection of (partly) new internet technologies nor just a new concept for internet applications, but a term able to subsume all three layers introduced in the “Web 2.0 Navigator”, an approach to grasp this new internet philosophy. Taking all concepts, ideas and potential difficulties mentioned above into consideration, the author defines the term Web 2.0 in the context of this paper as follows:

Web 2.0 can be defined as a new holistic internet philosophy, which is partially already emergent among what is today called Web 1.0, using changing paradigms on the technical, the conceptual and the application layer alike and enabling a service oriented internet platform for high user participation, enhanced networking and rich user experience.

2.2 Central Ideas of Web 2.0

This paragraph provides a short introduction to the central ideas of Web 2.0. Major novelties in the context of Web 2.0 emerge on the technical as well as the conceptual layer. At this point of the paper, it is important to point out two main concepts that strongly influence and shape the architecture of the new Web: the Web as an operating system with websites as applications, and the ideal of wide and deep user participation.11

"Platforms frequently are referred to as operating systems [...]"12

According to O’Reilly, the internet should be seen as a platform with extendable qualities. He states that “A Platform Beats an Application Every Time.”13 The internet should therefore be understood as an operating system for internet applications. Furthermore, the applications themselves should exhibit this extendable quality and become active parts of the platform. "The phrase 'Web as platform' refers to the fact that as web sites start providing their own APIs, they too are becoming a platform on which other programs can be built."14 To offer this quality, the development and spread of Application Programming Interfaces (APIs) throughout the Web is crucial. A possible use for these APIs are mash ups, which combine data and information from different sources into a new service.15

The founder of "User Interface Engineering"16, Jared M. Spool, states in his essay "Web 2.0: The Power Behind the Hype"17 the importance of APIs for the further development of Web 2.0: "One tool that is making this all possible is the increasing availability of Application Programming Interfaces (APIs) [...] It isn’t just the big boys who are creating these APIs. [...] Even specialized niche applications are starting to make APIs available.” Another important point in promoting the development of mash ups is the simplicity of the developed APIs. Common examples of simple and wide-spread formats are RSS and XML. A deeper engagement with RSS and XML will follow in chapter 3.2.
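
To illustrate how such simple APIs enable mash ups, consider the following minimal TypeScript sketch, which merges the item titles of two XML feeds into one combined list. It is an invented example: the feed URLs are placeholders, and any endpoint returning RSS-like XML would do.

    // Fetch an XML feed and extract its item titles; the URL is a placeholder.
    async function fetchFeedTitles(url: string): Promise<string[]> {
      const response = await fetch(url); // plain HTTP request to the feed's endpoint
      const xml = new DOMParser().parseFromString(await response.text(), "application/xml");
      return Array.from(xml.querySelectorAll("item > title")).map(t => t.textContent ?? "");
    }

    // Combine two independent sources into one new "service": a merged headline list.
    async function mashUp(): Promise<string[]> {
      const [news, blogs] = await Promise.all([
        fetchFeedTitles("https://example.org/news/rss.xml"),
        fetchFeedTitles("https://example.org/blogs/rss.xml"),
      ]);
      return [...news, ...blogs].sort();
    }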

Mature internet applications will have the power to substitute for classical software installed on the local computer. Neither installation nor updates have to be undertaken locally by the user; they are handled by the provider of the application. In the long run, O’Reilly expects the end of the classical software life cycle and of the conventional way individuals use applications.18 Already widely used and accepted in this context are Web mail applications such as Google Mail19, online calendar services and, on a bigger scale, the platform business of SAP, which was introduced through the MySAP and NetWeaver platforms.20 Ultimately, O’Reilly sees the decline of Microsoft and its substitution by companies offering Web 2.0 based applications. In that case, Web 2.0 can be considered a highly disruptive technology.

"Cal Henderson, the lead developer of Flickr, recently revealed that they deploy new builds up to every half hour. This is clearly a radically different development model!"21

In this quote, O’Reilly describes the need for constant change within Web 2.0. It reflects his principle of “the perpetual beta” version which, by concept and definition, can never really reach the pinnacle of development. This ideal is once again derived from the open source concept of “Release early. Release often. And listen to your customers [...]"22, which strongly integrates the user into the ongoing development and enhancement of the software. This concept leads to the deployment of high user participation, which is a vital part of the emergence and development of Web 2.0.

Even though the concept of user participation is not an exclusive feature of the Web 2.0 and can be traced back almost to the beginning of the development of the internet, the status and relevance of user participation has changed significantly.23 Web 2.0 enables new and more dynamic ways to make use of collective intelligence, a topic which will be covered in depth in chapter 3.3.3. Furthermore, the development of an enhanced social collaboration infrastructure is a direct consequence of the architectural layout of the new Web.

Prior to the Web 2.0 standard, internet pages were created to offer content to the customer.24 Content creation was in the hands of the page owner, and the user was therefore limited to a state of consumerism. The relationship between content provider and content consumer was clear cut and the structure of power was rather asymmetric. While publishing through personal websites, homepages etc. later became an option for the former consumer, the static nature of the Web and the technical entry barriers for the regular internet user remained hard to overcome.

Especially through the emergence of blogs and wiki systems, which will be covered in chapter 3.4, the passive consumer became an active participant with the possibility to provide content and to take part in discussion and exchange. This mass participation is not just another useful feature of the new Web but the central idea and philosophy behind the concept. The implementation of active end-user participation can be considered the crucial novum around Web 2.0. User participation takes all possible forms: from the development and improvement of the platforms through intense customer contact loops25 during “the perpetual beta”, through content generation in wiki systems, blogs and social bookmarking systems and the self-organization and referencing of content through collaborative tagging, also referred to as folksonomy, to the reflection and adaptation of existing content in discussion forums. In the concept of participative software, the user community provides data and content, while the platform and applications merely give the standardised yet flexible frame to handle and present the content.

2.3 Semantic Web and Web 2.0

The Semantic Web and Web 2.0 are partly related concepts; nevertheless, using the terms as synonyms for each other is a common mistake, as both concepts show significant differences. The following chapter shows the relationship between the two concepts and marks out the important and fundamental differences between them.

The Semantic Web was first described by Tim Berners-Lee, the pioneer of the World Wide Web. The intention of the CERN scientist was to lay out the concepts for the next major step in the development of the internet.26 The Semantic Web offers the possibility to add descriptive data27 to content, which enables machines to relate the data to a certain meaning. The data becomes Semantic. Semantic data offers various benefits: the data can be interpreted not only by the human user but also by the computer itself. The computer can identify the meaning and relationships of the data and can assist in or enable structural pre-processing.28 Beyond the simple comparison with database data, further processing through the use of descriptive metadata can be handled automatically by the computer.29

As a common standard for computer-readable metadata, RDF30 is derived directly from basic human grammar and adds the Semantic quality to the data by using triples of information. Meaning and relations are described through subject, predicate and object. This enables the computer to understand and generate logical nexuses and opens the door for new automated content management systems and computer-processed data structuring and administration.31
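
The following TypeScript sketch illustrates the subject-predicate-object idea behind such triples, using plain data structures instead of an actual RDF library; the resource names are invented for illustration.

    // A triple states one machine-readable fact: subject, predicate, object.
    type Triple = { subject: string; predicate: string; object: string };

    const triples: Triple[] = [
      { subject: "ex:Berlin", predicate: "ex:isCapitalOf", object: "ex:Germany" },
      { subject: "ex:Germany", predicate: "ex:isPartOf", object: "ex:Europe" },
    ];

    // Because relations are explicit, a program can follow them mechanically,
    // e.g. find everything a given subject is related to via a given predicate.
    function objectsOf(subject: string, predicate: string): string[] {
      return triples
        .filter(t => t.subject === subject && t.predicate === predicate)
        .map(t => t.object);
    }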

A similar concept is used within the Web 2.0 architecture. Metadata is also added to the content to add meaning to the bare data. Tagging means incorporating metadata by giving tags to articles, texts, pictures or blogs. These tags can support navigation through the massive amount of data, such as blogs and pictures. Automated tag clustering and tag clouds offer further aggregation of metadata.32 Tagging, also referred to as folksonomy, will be covered in chapter 3.3.4. Consequently, the basic idea of the Semantic Web is already deployed in the context of Web 2.0. The crucial difference is that tagging does not offer a Semantic grammar as RDF does, but only simple tags without relationships and without the further descriptive opportunities of subject, predicate and object. Even though tags may support the user, their meaning for computers and automated processing is limited and therefore cannot meet the demands for the Semantic Web defined by Berners-Lee.33 Another fundamental concept of the Semantic Web, the ontology34, cannot be met by the current Web 2.0 platform. An ontology describes the relationships of different objects and enables the computer to draw logical conclusions automatically.35 Tagging, on the other hand, is a rather random approach, highly dependent on the individual understanding of the person tagging the content. Tagging can lead to non-logical relationships.
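
For contrast, a folksonomy is technically little more than a flat collection of user-chosen labels. The following minimal sketch, with invented example tags, shows the frequency counting behind a tag cloud; note that no subject-predicate-object structure survives, so a machine cannot infer relations from it.

    // Count how often each tag occurs; a page would render the counts as font sizes.
    const tags = ["web2.0", "ajax", "web2.0", "blog", "ajax", "web2.0"];
    const cloud = new Map<string, number>();
    for (const tag of tags) {
      cloud.set(tag, (cloud.get(tag) ?? 0) + 1);
    }
    // cloud: { "web2.0" => 3, "ajax" => 2, "blog" => 1 }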

Another demand of Berners-Lee for the Semantic Web is the existence of clear cut standards to grant global interoperability of Semantic Web applications. Even programs developed totally apart from each other can exchange meaningful information and can interact without the need for an additional exchange interface. This is handled by the implementation of Middleware Application Servers (MAS).36 Again, the Web 2.0 architecture has grasped the main idea of the Semantic Web and tries to realize inter-operability through the development of APIs, including integrated interfaces for the HTML and XML standards. For further information on XML, see chapter 3.2.2. This approach seems promising but is still far from Berners-Lee’s idea of global inter-operability.

In conclusion, the Web 2.0 still lacks fundamental qualities of the ideal Semantic Web as described by Berners-Lee and can therefore be seen as an important but intermediate step towards a real Semantic Web. The following illustration gives an overview of the different categories and evolutionary steps of Web applications and helps to relate the Semantic Web to Web 2.0. As the figure illustrates, the Web 2.0 covers a range of features on the evolutionary path to the Semantic Web.

Figure 2: Different categories of Internet applications

illustration not visible in this excerpt

3 Web 2.0 Navigator - a Three Layer Model for Web 2.0

The Web 2.0 Navigator was developed to offer a comprehensive model for portraying the complex nature of the Web 2.0. It defines the Web 2.0 along three different dimensions: the technological layer, the conceptual layer and the application layer. Furthermore, it displays three different categories of human domains which are affected by the medium: the individual, the social and the macro-economical domain.

This model has been developed due to the lack of a clear-cut definition for the ‘buzzword’ Web 2.0 in both scientific literature and the current understanding of the internet community.

Figure 3: Web 2.0 Navigator - a three layer conception of Web 2.0

illustration not visible in this excerpt

3.1 Developing the Web 2.0 Navigator

The Web 2.0 Navigator was developed to counter the vague or even missing definitions of the term Web 2.0. It is based on extensive studies of scientific literature, current online discussions and several essays and magazine articles on the topic of Web 2.0. These sources were used to identify constantly recurring patterns of technologies, concepts and applications or services. Those have been collected, related to each other, clustered and grouped into three independent layers.

While the technological layer is merely the underlying architecture and the application layer the collection of available and possible deployments of the Web 2.0, the conceptual layer links the two and constitutes the core innovation of the Web. Grasping these correlations comprehensively allows a better understanding of the vast term Web 2.0 and an even deeper understanding of the concept itself. This understanding is essential for a meaningful analysis of the possible effects of its spread.

3.2 Technological Layer

The technological layer aggregates all important technical standards and formats of the Web 2.0 architecture. They will be introduced separately, and their function and relevance will be explained briefly. The interesting thing about the underlying technology is that it is mainly a compilation of already existing standards, most of which have been in use since 1998. The innovative and crucial part is the combination of the technologies. Understanding the underlying technology seems to play an important role in grasping Web 2.0.

3.2.1 AJAX

Asynchronous JavaScript and XML (AJAX) is a striking novum in the Web 2.0 architecture. It enables website developers to realize various new possibilities in website design and can be held responsible for a major part of the Web 2.0 hype. This new AJAX supported model was made possible mainly through the spread of broadband and DSL connectivity, which can handle the transfer of high amounts of data.37

AJAX itself is not a totally new technical standard but rather a mash up of already existing internet standards. Through the combination of wide-spread technical standards, AJAX offers totally new features. Internet technologies merged into AJAX are, for example, the Extensible Hypertext Markup Language (XHTML), Cascading Style Sheets (CSS), the Document Object Model (DOM), the XMLHttpRequest object and JavaScript. While XHTML is used for describing data and objects and CSS is responsible for formatting, the DOM allows the dynamic display and manipulation of the page, XMLHttpRequest handles the asynchronous data exchange with the server, and JavaScript finally accounts for the dynamic nature of the website, reacting to the user’s actions on the site.38

Even though the technology behind AJAX seems highly complex, the end user cannot tell at first sight whether AJAX is used for an internet page or not. Basically, AJAX is a non-invasive technique concerning the graphical display of content. The difference between AJAX and standard websites is that the scalable regions of the GUI are controlled by JavaScript. This feature enables the browser to reload and refresh the displayed content region by region instead of reloading the whole page.39 The benefit for the user is the emulation of a real-time application via the internet, while a constant refresh of the content is undertaken in the background of the application. This allows real-time interactivity comparable to Macromedia Flash content but exceeds this technique because, unlike AJAX, Flash content does not offer direct manipulation of the page content by the user.40 Figure 4 shows the schematic model for AJAX supported applications compared to classical Web applications.

Figure 4: The traditional model for Web applications compared to the Ajax model

illustration not visible in this excerpt
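
The following TypeScript sketch illustrates this pattern; the "/headlines" endpoint and the element id are invented placeholders. Only a single region of the page is refreshed in the background, never the whole page.

    // Request a content fragment asynchronously and update one region of the DOM.
    function refreshHeadlines(): void {
      const request = new XMLHttpRequest(); // the object behind the "A" in AJAX
      request.open("GET", "/headlines", true); // "true" makes the call non-blocking
      request.onload = () => {
        const target = document.getElementById("headlines");
        if (target && request.status === 200) {
          target.innerHTML = request.responseText; // refresh this region only
        }
      };
      request.send();
    }

    // A periodic background refresh emulates a real-time application in the browser.
    setInterval(refreshHeadlines, 30_000);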

One of the key elements of successful AJAX development and implementation is the use of JavaScript Remote Scripting (JSRS). JSRS handles all dynamic queries emerging from the user’s operations to the server and enables AJAX to emulate real-time graphical interfaces via the browser.41 This is one of the most striking steps towards the internet as a platform as postulated by O’Reilly and may, over the long run, substitute for traditional locally installed desktop applications such as the well-known Microsoft programs. Google may therefore become the main competitor for Microsoft’s standard software packages.42 Google was also responsible for the first important impact of AJAX on the internet community through the deployment of AJAX as the key technology behind the geospatial software43 Google Earth. The Google Earth client44 allows real time browsing through highly detailed and globally synchronized satellite pictures of the whole world.

Finally, complex technologies like AJAX are not developed as an end in themselves but are just the enabler for the crucial part of internet applications: user friendly display and usability of content. AJAX has contributed a great deal to improving this usability and to enabling a more dynamic use of the internet.

3.2.2 XML

The Extensible Markup Language (XML) is a vital part of the Web 2.0 architecture. XML allows improved functionality for Web content through a more accurate, flexible and adaptable standard compared to HTML, the common internet document format.45 XML is considered extensible, unlike HTML, which is a predefined markup language, because it functions as a meta language. XML offers Web designers the possibility to build their own specific markup language according to individual needs. The XML standard is derived from the international standard meta language for text document markup (SGML), as is HTML.46 The adaptability of HTML is highly limited because, apart from available plug-ins, browsers were not compatible with other variations of SGML-derived markup languages. This hindered the development of new internet applications.

XML requires the developer to observe highly standardized rules, but on the other hand it allows modification and reduces the complexity of bare SGML.47

In general, the use of XML for Web documents is not that common yet. In the context of Web 2.0, however, the XML-based XHTML format is widely deployed. Besides its use as an HTML extension, XML offers a variety of different uses. The following list is based on the XML FAQ by Peter Flynn:48

- Information identification: the feature of defining your own markup allows meaningful names to be defined for individual information items.
- Information storage: the portable, non-proprietary architecture based on international standards allows storage, access and processing as a data format across any platform.
- Information structure: XML can be used to store and identify any kind of (hierarchical) information structure (long, deep, or complex document sets or data sources). It is used for back-end information management serving the Web. Combined with a transformation system to serve the content as HTML, this is the most common use of XML.
- Publishing: by combining information identity, storage and structure, it is possible to realize advanced document management and control. XML allows publishing to the Web (as HTML) as well as to paper (as PDF) and to other formats from a single source document, using style sheets like CSS.49
- Messaging and data transfer: another important feature of the XML standard is its use to manage inter-operability between different computing systems. By providing a standardized language for data identity and structure, information can be encapsulated into XML to allow inter-system and inter-process communication. This is called messaging (a small sketch follows below).
- Web services: finally, the combination of all features mentioned above builds the frame for data exchange which can be processed by machines, as opposed to HTML, which is only comprehensible to humans. It is used by weather services, e-commerce sites, blog newsfeeds, AJAX sites, and other data-exchange services using XML for data management and transmission and the Web browser with XHTML and CSS for display and interaction.

While AJAX mainly deals with the front-end appearance of Web 2.0 applications, XML is the key markup language to manage documents and data. The XML standard has enormous potential to function as the interface between the AJAX-supported front-end and the SPARQL-supported database back-end of applications.
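
As a small illustration of the messaging use listed above, the following TypeScript sketch re-parses a self-defined markup into data; the element names (order, customer, item) are invented for this example.

    // A self-defined XML vocabulary carries structured data between systems.
    const message = `<?xml version="1.0"?>
    <order id="42">
      <customer>Alice</customer>
      <item sku="book-123" quantity="2"/>
    </order>`;

    // Any platform with an XML parser can reconstruct the same information.
    const doc = new DOMParser().parseFromString(message, "application/xml");
    const order = {
      id: doc.documentElement.getAttribute("id"),
      customer: doc.querySelector("customer")?.textContent,
      sku: doc.querySelector("item")?.getAttribute("sku"),
    };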

3.2.3 API

Application programming interfaces (APIs) offer a certain degree of inter-operability for different Web applications by wrapping content into compatible and open interfaces. APIs are based on service oriented architecture principles. They allow the exchange of and access to data stores between independent applications. APIs also permit the creation of so-called mash ups, the remix of data from different sources to build a new application.50

Currently, the two most common API standards are the Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). SOAP comes with a higher degree of standardization compared to REST and is generally used to handle a higher number of programming interface requirements. It is used to interconnect business systems on a larger scale. REST, on the other hand, is the simpler concept of requesting XML structured content via plain HTTP document requests.51 Sometimes both concepts are used simultaneously to serve different groups of data requests. Amazon.com, for example, uses a SOAP architecture on its big scale systems to link business partners such as Toys’R’Us to the enterprise database, while REST is used to provide the product catalogue and data such as customer reviews to any site offering Amazon.com online orders directly.
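
A minimal TypeScript sketch of the REST idea described above: state is requested as an XML document via a plain HTTP GET. The endpoint URL and element names are invented placeholders, not Amazon's actual API.

    // One URL addresses one resource; a GET request returns its XML representation.
    async function getProduct(sku: string): Promise<{ title: string; price: string }> {
      const response = await fetch(`https://api.example.com/products/${sku}`);
      const xml = new DOMParser().parseFromString(await response.text(), "application/xml");
      return {
        title: xml.querySelector("title")?.textContent ?? "",
        price: xml.querySelector("price")?.textContent ?? "",
      };
    }
    // SOAP would instead POST an XML envelope describing the operation; REST simply
    // addresses the resource itself, which keeps simple read access very cheap.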

[...]


1 Cf. Wikipedia (2006g).

2 Cf. Voelker (n.d.), p.3.

3 Graham (2005).

4 Cf. Graham (2005).

5 O’Reilly (2005b).

6 Cf. O’Reilly (2005a), p.4.

7 Cf. O’Reilly (2005a), p.4.

8 Cf. O’Reilly (2005a).

9 Cf. Wikipedia (2006b).

10 Cf. Graham (2005).

11 Cf. ProgrammableWeb (2006).

12 Wikipedia (2006c).

13 O’Reilly (2005a), p.2.

14 ProgrammableWeb (2006).

15 A mash up is a Web page or application that combines data from two or more external online sources. The external sources are typically other Web sites and their data.

16 URL: http://www.uie.com/.

17 Cf. Spool (2005).

18 Cf. O’Reilly (2005a), p.4.

19 URL: http://www.gmail.com/.

20 Cf. SAP (2006).

21 O’Reilly (2005a), p.4.

22 Raymond (2000).

23 Cf. Shklovski/Kraut/Rainie (2004).

24 Please refer to Figure 2.

25 Cf. O’Reilly (2005c).

26 Cf. W3C (n.d.).

27 Also referred to as metadata.

28 Cf. Hansen/Neumann (2005b), p.508.

29 Cf. Berners-Lee/Hendler/Lassila (2001), p.1.

30 Resource Description Framework.

31 Cf. Hansen/Neumann (2005a), p.1049.

32 Cf. Hansen/Neumann (2005a), p.1050f.

33 Cf. Berners-Lee/Hendler/Lassila (2001).

34 Ontologies resemble faceted taxonomies but use richer semantic relationships among terms and attributes, as well as strict rules about how to specify terms and relationships. Because ontologies do more than just control a vocabulary, they are thought of as knowledge representation.

35 Cf. McGuiness/Harmelen (2004).

36 Cf. Oberle/Staab/Eberhart (2005).

37 Cf. Teare (2005).

38 Cf. Garrett (2005).

39 Cf. Teare (2005).

40 Cf. Walker (2006).

41 Cf. Ashley (2005).

42 Cf. Donoghue (2006).

43 Geospatial software is used to gather, store, process, and deliver geographical information.

44 URL: http://earth.google.com/.

45 Cf. Flynn (2006).

46 Cf. Flynn (2006).

47 W3C (2006).

48 Cf. Flynn (2006).

49 CSS is the acronym for Cascading Style Sheet, a common standard to set layout formats.

50 Cf. MacManus (2005).

51 Cf. Asaravala (2002).

End of excerpt from 76 pages

