Business Process on‐Demand; Studying the Enterprise Cloud Computing and its Role in Green IT


Master's Thesis, 2011

170 pages, grade: 1


Reading sample


Contents

CHAPTER 1 - INTRODUCTION
1.1. STRUCTURE OF THE THESIS

CHAPTER 2 - CLOUD COMPUTING FUNDAMENTALS
2.1. HISTORY
2.2. DEFINITION
2.3. KEY CHARACTERISTICS OF CLOUD COMPUTING
2.4. CLOUD DELIVERY MODEL (CLOUD SERVICES)
2.5. CLOUD DEPLOYMENT MODEL (CLOUD TYPES)
2.5.1. Private Cloud (Internal Cloud or Corporate Cloud):
2.5.2. Public cloud or external cloud:
2.6. CLOUD TECHNOLOGIES
2.6.1. VIRTUALIZATION
2.6.2. SERVICE ORIENTED ARCHITECTURE AND WEB SERVICES
2.6.3. MASHUP AND WEB 2.0

CHAPTER 3 - GREENING THE IT AND ENVIRONMENTAL ISSUES
3.1. GREEN IT
3.1.1. ENERGY EFFICIENCY IN DATA CENTERS
3.1.2. HOW ENERGY WILL BE USED IN A DATA CENTER
3.2. MEASURING DATA CENTER EFFICIENCY
3.2.1. SPEC STANDARD
3.2.1.1. SPEC FOR VIRTUAL SERVERS
3.2.2. EPA METRICS
3.2.3. LEED RATING SYSTEM
3.2.4. GREEN GRID POWER EFFICIENCY METRICS
3.2.5. POWER USAGE EFFECTIVENESS (PUE)
3.2.6. DATA CENTER PRODUCTIVITY (DCP)
3.3. MODULARITY OF DATA CENTERS
3.4. ENVIRONMENTAL ISSUES
3.5. REGULATIONS AROUND THE WORLD
3.5.1. KYOTO PROTOCOL
3.6. CLOUD COMPUTING

CHAPTER 4 - ORGANIZATIONS AS SYSTEMS
4.1 BUSINESS FUNCTIONS
4.1.1 BUSINESS FUNCTIONS ONTOLOGY
4.2. BUSINESS PROCESSES
4.5. BUSINESS PROCESS MANAGEMENT
4.6. BUSINESS PROCESS MANAGEMENT SYSTEMS (BPMS)
4.7. BUSINESS PROCESS DISCOVERY
4.8. BUSINESS PROCESS MANAGEMENT IN THE CLOUD
4.9. BUSINESS PROCESS CLASSIFICATION FRAMEWORKS
4.10. INFORMATION SYSTEMS IN AN ORGANIZATION
4.10.1 Office Automation (OA) and Personal Information Management Systems (PIM)
4.10.2. Transaction Processing Systems (TPS)
4.10.3. Functional Area Information Systems (FAIS)
4.10.4. Enterprise and Inter-organizational Information Systems (IOS)

CHAPTER 5 - COMPREHENSIVE MARKET ANALYSIS

CHAPTER 6 - CLOUD ONTOLOGY
6.1. WEB ONTOLOGY LANGUAGE (OWL)
6.2. PROTÉGÉ- A SEMANTIC EDITOR
6.3. CLOUD ONTOLOGY
6.3.1 DEFINING CLASSES
6.3.2 OBJECT PROPERTIES
6.4. SEMANTIC REASONER

CHAPTER 7 - MOVING INTO THE CLOUD
7.1. CONSIDERATIONS BEFORE MOVING INTO THE CLOUD
7.1.1. CLOUD COMPUTING BENEFITS
7.1.2. CLOUD COMPUTING CHALLENGES AND RISKS
7.1.3. SECURITY ISSUES IN CLOUD COMPUTING
7.1.3.1. SECURITY ADVANTAGES
7.1.3.2. SECURITY CHALLENGES
7.1.4. INTEGRATION WITH LEGACY SYSTEMS
7.2. CLOUD ADOPTION DECISION MAKING
7.2.1 CLOUD DECISION AND AHP

CHAPTER 8 - EVALUATION OF THE WORK
8.1. FIRST USE CASE
8.2. SECOND USE CASE

CHAPTER 9 - CONCLUSIONS

CHAPTER 10 - APPENDIXES
10.1. APPENDIX
10.2. APPENDIX

REFERENCES

INTRODUCTION

The rapid growth of businesses and the vital need to use Information Technology as a tool for monitoring, improving, and optimizing business processes have led many enterprises to adopt appropriate and effective Information Technology solutions such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), or Supply Chain Management (SCM) systems.

Presently, several cloud service providers around the world offer their customers different computing services, ranging from simple e-mail services like "Google Mail" or social networks like "Facebook" to more complex Anything-as-a-Service (XaaS) solutions such as Customer Relationship Management (CRM) by salesforce.com, or even real-time image processing services for spatial data processing or MRI brain scanning[2] [3].

Despite all the benefits that result from cloud migration, executing this transfer is no easy task for most organizations. Legacy application problems, the security and sensitivity of data (loss of confidential data during the transfer), and the complexity of the services available from different cloud providers are among the challenges each organization might face[3]. The broad and varying needs of each organization, combined with the limited services of the respective cloud providers, make this move a very difficult and stressful process for many companies and organizations. Hence, there is a vital need for a comprehensive study that focuses on the available services of various cloud providers and provides a mapping between business processes and the cloud services available on the market. This is done through a comprehensive market analysis, whose results are used as the basis for developing the subsequent cloud ontology.

By developing an ontology, organizations will be able to understand and explore available cloud services in a meaningful way. The cloud ontology can significantly reduce time and expenses that occur in the process of finding the appropriate cloud services.

The main contribution of this thesis is to find an adequate answer to this question: How is it possible to map business processes and functions to appropriate cloud services available on the market?

1.1. STRUCTURE OF THE THESIS

Several issues are studied in accordance with the objectives of this thesis. In general, this thesis tries to find an appropriate solution for mapping business processes to the cloud services available on the market.

First, the concept of Cloud Computing including its history, definition, key characteristics and different deployment models is explained. Then, a brief study on cloud technologies including virtualization, Service Oriented Architecture (SOA), Web 2.0 and Mashup technologies is provided.

In the chapter on greening the IT and environmental issues, the energy usage and efficiency of data centers is studied. In addition, the different standards available for energy efficiency in data centers are explained; SPEC, the EPA metrics, the LEED rating system, and the Green Grid power efficiency metrics are among the standards under scrutiny. Beyond metrics, the modularity of data centers is also highlighted. In the section on environmental issues, the reduction of energy consumption and CO2/GHG[1] emissions in data centers and servers is analysed. Finally, different regulations regarding energy efficiency and global warming, including the Kyoto Protocol, as well as the role of cloud computing in reducing greenhouse gas (GHG) emissions and the carbon footprint, are studied.

In the next chapter, on organizations as systems, the concepts of business processes and functions, including their available frameworks, are examined, and the Information Systems (IS) required by organizations are highlighted. After explaining the basic definitions of business processes and information systems, I carry out a comprehensive market analysis, and the results are presented in the form of two matrices: one for business processes and functions, and the other for PaaS, IaaS, and the other cloud services available on the market.

After studying the matrices, the concept of an ontology for cloud computing is explained. The benefits an ontology can offer, the use of a semantic reasoner, and the proper representation of knowledge are highlighted.

The last section of this thesis presents different migration decision-making strategies from various available sources. After studying the challenges and benefits of cloud computing for enterprises, some available decision-making strategies, such as the Balanced Scorecard (BSC) and the AHP assessment method[2], are assessed for the evaluation of candidate applications for migration into the cloud.
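To illustrate the AHP assessment method mentioned above, the sketch below derives criterion weights from a pairwise comparison matrix using the common row geometric-mean approximation. The criteria and the pairwise judgments are invented for illustration; the actual comparison matrices used in the thesis may differ.

```python
import math

# Hypothetical pairwise comparison matrix over three cloud-migration
# criteria (cost, security, scalability) on Saaty's 1-9 scale; entry
# [i][j] states how strongly criterion i is preferred over criterion j.
criteria = ["cost", "security", "scalability"]
A = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]

def ahp_weights(matrix):
    """Approximate AHP priority weights via row geometric means."""
    geo = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

weights = ahp_weights(A)
for name, w in zip(criteria, weights):
    # In this invented matrix, cost dominates, so it gets the largest weight.
    print(f"{name}: {w:.3f}")
```

The resulting weight vector ranks the criteria; in a full AHP assessment the same procedure is repeated for the candidate applications under each criterion, and the weighted scores decide which application to migrate first.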

Finally, two fictitious companies are selected, and it is shown how the ontology would benefit their specific requirements, using the SPARQL[3] query language to find appropriate cloud services for each company.
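A query of the kind used in that evaluation can be sketched as follows. This is a hypothetical example; the prefix and the names `cloud:Service`, `cloud:supportsProcess`, `cloud:hasProvider`, and `cloud:CRM` are illustrative placeholders, not the vocabulary of the thesis's actual ontology.

```sparql
# Hypothetical query: find all cloud services in the ontology that
# support a given business process (here: CRM), with their providers.
PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX cloud: <http://example.org/cloud-ontology#>

SELECT ?service ?provider
WHERE {
  ?service rdf:type cloud:Service ;
           cloud:supportsProcess cloud:CRM ;
           cloud:hasProvider ?provider .
}
```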

2 CLOUD COMPUTING FUNDAMENTALS

Cloud computing has become a hype term in recent years. It is a new term based on a mixture of existing technologies such as grid computing, virtualization, Service Oriented Architecture (SOA), and Web 2.0 [4].

The possibility of sharing computing resources, together with its low-cost advantages, has made cloud computing a popular term in recent years. In 2010, Gartner listed cloud computing, alongside social networks, web mashups, and multi-core/hybrid processors, among the top 10 strategic technologies[5].

The name cloud computing is principally a metaphor for the Internet. The shape of a cloud had been used in various resources before to represent Internet connectivity in network diagrams[4]. Cloud computing is a technology that uses the Internet as a tool for delivering different cloud services and computing resources (hardware, programming platforms, or software) to the customer[4]. The only things a customer needs are a web browser and connectivity to the Internet.

The diagram below presents Google Trends data for some top search terms, including cloud computing, grid computing, social networks, Web 2.0, and SOA[6].

illustration not visible in this excerpt

Figure 1 - Google trends for Cloud Computing (Source: Google trends, last viewed on 05.05.2011)

As the Google Trends data shows, cloud computing has become a matter of interest especially after 2007, when IBM announced its cloud computing initiative and research directions[7].

Unlike traditional systems, where computing services are provided by desktops or notebooks running locally installed software, new technologies like cloud computing make it possible for both the private and the public sector to access data storage, software, and computing services entirely through the Internet and remote servers (using a simple web browser)[8].

In simple words, “cloud computing is all services, applications and data storage delivered online by internet through powerful servers”[9].

2.1. HISTORY

As explained before, cloud computing is not a new concept; rather, it is an evolution of existing technologies. In other words, cloud computing can be seen as a new name for what already exists.

The first idea of cloud computing probably goes back to a speech by John McCarthy (an American computer scientist) in the 1960s, in which he suggested that computation may one day be organized as a public utility[10]. This is the concept of utility computing, in which computing services are used on demand, like water, gas, electricity, and telephone services, and customers are billed according to their actual usage of the service[11]. In this form, customers do not need to worry about the background of the service they receive; however, utility computing is mostly recognized as a business model rather than a technology[12]. Later, the concept of grid computing represented the idea of accessing computer resources as easily as drawing electricity from the power grid[10]. The idea behind grid computing in the 1990s was to access computing resources from multiple domains in order to reach a common goal. In grid computing, computers perform homogeneous tasks using a so-called super virtual computer, a combination of several computer networks working together[12]. Grid computing provides some form of virtualization, but because the grid fails when a single location fails, it can be seen as a weaker version of cloud computing[12].

In grid computing, resources are autonomous, distributed, and owned by several organizations, whereas in cloud computing the resources are owned by a commercial cloud provider and are under the provider's central control[13]. Grid and cloud computing are both scalable; this is achieved by proper load balancing of application instances running on different operating systems connected via web services[14]. Both grid and cloud computing use multitenancy, meaning that different users can perform various tasks by accessing a single application instance[14]. Both grid and cloud provide customers with Service Level Agreements (SLAs)[14]. The term cloud computing probably originated from the cloud diagrams used in various resources to represent Internet connectivity. This usage was common among telecommunication companies in the 1990s as they moved from point-to-point data circuits towards Virtual Private Network (VPN) services[10].

illustration not visible in this excerpt

Figure 2 - Internet VPN Diagram, using cloud sign for presenting internet connectivity (Source: Wikipedia English[4] )

In recent years, several companies have started to provide cloud services: Salesforce.com in 1999, Amazon in 2002, Google in 2006, and others[10].

2.2. DEFINITION

There are different definitions and explanations of the cloud; among the more widely accepted are those given by Ian Foster[5] and by Jeff Kaplan & Reuven Cohen[6] [15, 16].

In 2009 Luis M. Vaquero, L. R.Merino, J. Caceres and Maik Lindner examined 20 different available definitions of the cloud and finally defined the cloud as:

“A large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically re-configured to adjust to a variable load (scale), allowing also for optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized Service Level Agreements[7] (SLAs)”[17].

However, among these definitions, the clearest is provided by the National Institute of Standards and Technology (NIST), which defines the cloud as: “Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”[18]. According to this definition, cloud computing consists of five key characteristics, three cloud delivery models, and four deployment models[19].

2.3. KEY CHARACTERISTICS OF CLOUD COMPUTING

According to NIST, the five key characteristics of cloud computing are on-demand self-service, broad network access, location-independent resource pooling, rapid elasticity, and measured service.

2.3.1. On-demand self-service allows consumers to provision computing capabilities such as server time and network storage automatically, as needed, without requiring any human interaction with the service provider.

2.3.2. Broad network access lets consumers use the service over the cloud with any connectable device, such as mobile phones, desktops, notebooks, or even PDAs (thin and thick clients).

2.3.3. Location-independent resource pooling gives consumers the ability to use the provider's computing resources dynamically, based on their individual needs and independent of the location of the resources, for as long as they need them. For this purpose, cloud providers use a multi-tenant[8] model, and users have no control over where the resources are located, although they may be able to specify the location at a higher level of abstraction[19].

2.3.4. Rapid elasticity provides the consumer with capabilities that can be provisioned rapidly and elastically, in some cases automatically. This feature enables consumers to rapidly scale out and, when needed, release resources and scale back in. To the consumer, these capabilities often appear unlimited and can be purchased in any quantity according to demand.

2.3.5. Measured service, or pay-per-use, means that cloud systems meter the services consumed by each consumer and charge them individually for the services they use (fee-for-service). This makes it possible to automatically optimize and control the services used by consumers[19]. In other words, resource usage can be monitored, controlled, and reported transparently for both consumers and providers[19]. This can also include an advertising-based billing model; Facebook and Flickr are examples of this model, earning money by serving advertisements[20].
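The pay-per-use idea can be sketched as a tiny metering routine. This is a deliberately simplified illustration: the rate, the consumer names, and the single "server-hours" dimension are all assumptions; real providers meter many resource dimensions (CPU, storage, network traffic, and so on).

```python
# Hypothetical pay-per-use metering sketch: each consumer's metered
# usage is billed individually at a flat hourly rate (fee-for-service).
RATE_PER_HOUR = 0.10  # assumed price in USD per server-hour

def monthly_charge(usage_hours_per_consumer):
    """Return the fee-for-service charge for each consumer."""
    return {consumer: round(hours * RATE_PER_HOUR, 2)
            for consumer, hours in usage_hours_per_consumer.items()}

usage = {"consumer-a": 720, "consumer-b": 36.5}
print(monthly_charge(usage))  # each consumer pays only for metered use
```

The point of the sketch is the measured-service property itself: because usage is metered per consumer, billing, optimization, and reporting can all be driven from the same transparent usage records.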

2.4. CLOUD DELIVERY MODEL (CLOUD SERVICES)

Theoretically, when we talk about the cloud, we mean a combination of systems that bring IT resources to remote users as a service[8].

According to NIST, computing within a cloud (cloud services) can be delivered in three different forms[19].

2.4.1. Software-as-a-Service (SaaS), in which the consumer uses specific software already installed on the provider's cloud. It is accessible from anywhere, on various client devices, through a web browser. In this form, the consumer does not manage the infrastructure underneath the cloud. The salesforce.com CRM and Google Mail are examples of such services.

2.4.2. Platform-as-a-Service (PaaS) provides the consumer with a collection of applications, programming languages such as Java, .NET, or Python, and some user tools[9]. The consumer does not manage installations, hardware, the OS, servers, or other infrastructure, but is able to develop applications and control the configuration of the application hosting environment[19]. For example, Salesforce's Force.com platform lets developers add services for special purposes and extend the initial services of salesforce.com. Another example is the Cordys Business Operations Platform (BOP), which makes it possible to extend the capabilities of the full BPM lifecycle. Google App Engine, the Zoho platform, and Microsoft Azure are further examples of available PaaS services.

2.4.3. Infrastructure-as-a-Service (IaaS) lets the consumer deploy and run arbitrary software, including different operating systems, on the cloud, with the cloud's computing resources provisioned for the customer. The customer does not manage the underlying cloud infrastructure but controls the applications, operating systems, data storage, and components such as firewalls or load balancers[19]. Amazon EC2 and S3 are examples of available IaaS services.

illustration not visible in this excerpt

Figure 3 - NIST visual model of Cloud Computing (Source: Cloud Security Alliance[9] 2009, page 14)

2.5. CLOUD DEPLOYMENT MODEL (CLOUD TYPES)

The deployment of a cloud depends on its form. NIST defines four deployment models[19].

2.5.1. Private Cloud (Internal Cloud or Corporate Cloud):

Private clouds are dedicated to a specific company or organization[9]. This means the cloud infrastructure (network or data center) is used only for a specific business, behind the enterprise firewall[8].

This kind of cloud is privately owned, mostly by large companies or governmental organizations, and has restricted access; the hosting, however, can also be external[21]. Governmental agencies and large companies prefer this kind of cloud computing because of its greater controllability and more secure environment[8].

2.5.2. Public cloud or external cloud:

A public cloud is dedicated to a large number of organizations or clients; simply put, it is a cloud for the general public[9]. This is the most common form of cloud computing[4]. The owner of a public cloud is a provider who sells cloud services[19] and offers consumers different services on a pay-per-use (pay-as-you-go) basis[21]. Public clouds provide services in three forms: Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service.

Salesforce.com and Intuit QuickBooks Online[10] are among the public Software-as-a-Service providers; Google App Engine[11], Force.com, and Windows Azure are examples of public Platform-as-a-Service providers; and Amazon Web Services and the IBM Cloud are examples of public Infrastructure-as-a-Service providers[21]. According to H. Jin et al. (2009), security and data governance are among the most important issues in this form of cloud computing[8].

illustration not visible in this excerpt

Table 1 - Public Cloud vs. Private Cloud (Source: Handbook of Cloud Computing, p. 338, 2010)

2.5.3. A hybrid cloud is a mixture of private, public, or community clouds[4]. In a hybrid cloud, the company keeps its important data and applications behind its own firewall and puts the less critical ones on the public cloud[8]. Eucalyptus and GoGrid are examples of hybrid cloud service providers[4].

2.5.4. A community cloud is a possible variant of the other three forms of cloud[21]. This cloud is shared by a number of companies or organizations with close or shared concerns; in other words, community clouds are used mostly by organizations that have common interests, such as a certain mission or shared security requirements[19]. The mentality of the community cloud actually comes from grid computing and volunteer computing[12]. In this form of cloud, companies with similar requirements share an infrastructure, which makes it possible to share the costs and relatively increase the scale[8]. A community cloud can also be built by distributing server functionality among a number of user machines, creating a kind of virtual data center from their unused (underutilized) resources[22]. In this model, the costs are shared among fewer users than in a public cloud, which is why such a cloud is more expensive than a public cloud[23]. An example of this form of cloud is Google's “Gov Cloud”[24].

2.6. CLOUD TECHNOLOGIES

Cloud computing is based on existing technologies such as virtualization, Service Oriented Architecture (SOA), web services, mashups, workflows, and service flows[4].

2.6.1. VIRTUALIZATION

The utilization of servers in organizations is an important issue. In the cloud, servers can be shared, or virtualized. Cloud computing takes advantage of virtualization technology, which lets resources be shared among different applications and thus helps reduce the number of servers needed[4]. The diagram below presents the differences between virtualized and non-virtualized servers: virtualization makes it possible to run several applications and operating systems on a single server.

illustration not visible in this excerpt

Figure 4 - Virtualized and non-virtualized servers (Source: IBM developer works[13] )
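The consolidation effect of virtualization can be quantified with a back-of-the-envelope calculation. The utilization figures below are assumptions chosen for illustration, not measurements from any particular data center.

```python
import math

# Assumed scenario: 20 applications, each on its own physical server
# averaging only 10% utilization; a virtualized pool is instead run at
# a 70% target utilization per host.
workloads = 20
avg_utilization = 0.10
target_host_utilization = 0.70

total_demand = workloads * avg_utilization            # 2.0 "server-equivalents"
virtualized_hosts = math.ceil(total_demand / target_host_utilization)

print(virtualized_hosts)  # 3 virtualized hosts instead of 20 physical servers
```

Under these assumptions, consolidation replaces 20 underutilized machines with 3 well-utilized hosts, which is the mechanism behind the reduced server count (and reduced energy use) described above.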

2.6.2. SERVICE ORIENTED ARCHITECTURE AND WEB SERVICES

Service Oriented Architecture (SOA) and web services are among the basic technologies used in cloud computing. Web services are used for developing cloud services; they are defined as methods of communication between different electronic devices over a network[4, 25]. Web services are based on industry standards such as SOAP, WSDL, and UDDI[4]. Service Oriented Architecture (SOA) makes it possible to organize and manage the web services inside the cloud[4].

2.6.3. MASHUP AND WEB 2.0

Web 2.0 is a concept related to web applications that facilitate information sharing, collaboration, and interoperability on the World Wide Web[26]. The Web 2.0 concept enables collaboration between different users in a social media form, in contrast to conventional passive websites, where users can only view pages without interacting or leaving their own comments[26]. Web 2.0 is a technology and a way of designing the web that increases information sharing and collaboration between users[27].

In addition to Web 2.0, a mashup is a web page or application that combines data from several sources into a single integrated service or application[4]. A possible mashup example is merging addresses from a database into a mapping service like Google Maps[4]. Combination, visualization, and aggregation are among the most important characteristics of mashup services[28].

Mashups can also span several clouds, through the combination of different cloud computing services; in other words, “cross-cloud mashups”. For example, Appirio developed an app that makes a cross-cloud mashup between Facebook and Salesforce.com possible[29].

3 GREENING THE IT AND ENVIRONMENTAL ISSUES

Information Technology plays an important role in the business success of modern organizations. IT optimizes businesses by providing better tools and infrastructure. Companies try to benefit from IT in order to increase their productivity and quality of service, or even to manage their organizational issues.

Reducing costs and maximizing profit have forced many companies to look for better and more affordable solutions for their business. Data centers are major energy consumers in any organization, which is why reducing their energy consumption is beneficial for organizations in terms of economies of scale.

It is important to note that servers and data centers are the most important infrastructure of IT solution companies, some of which own many thousands of servers around the world. Google, Amazon, Microsoft, eBay, Yahoo!, and Intel are among the companies that own the most servers worldwide. As an example, Google is estimated to run more than 450,000 servers[14], and Microsoft, with 470,000 servers, is among the pioneers in this field. Amazon is estimated to have invested in more than 40,000 servers just for running Amazon Web Services EC2[30, 31].

Cloud computing technology is principally based on virtualized servers in the form of data centers. Many cloud providers, such as Amazon, Google, or Yahoo!, have been investing in mega-scale data centers with several thousands of servers each. That is why energy efficiency in data centers is a very important issue for these cloud providers.

Google has several data centers around the world, located in different geographical areas and continents. According to Data Center Knowledge[15], in 2008 Google had servers in 19 different locations in the US, 12 in Europe, 3 in Asia, and one each in Russia and South America[32]. California, Virginia, Georgia, Berkeley, Council Bluffs, Lenoir, Mayes County, The Dalles, Ireland, Oregon, Belgium, the Netherlands, Sydney (Australia) and, since 2009, Finland are among these locations[33]. Microsoft and Yahoo! are expected to have servers in Quincy, Washington, and Amazon in The Dalles[3].

illustration not visible in this excerpt

Figure 5 - Google server locations worldwide (Source: http://www.wayfaring.com/maps/show/48030)

Several studies have been conducted to determine whether cloud computing is efficient in terms of energy consumption and environmental impact. In this chapter, the concept of Green IT and energy efficiency in data centers is studied. In addition, issues such as measuring data center efficiency, the available metrics, and the modularity of data centers are explained. Finally, the environmental issues regarding CO2 emissions and the available regulations on Green IT are studied.

3.1. GREEN IT

Besides the benefits that deploying IT can bring to an organization, there are several unintended side effects that impact our environment. Awareness of these possible negative side effects of using IT in business has led several organizations to look for solutions to this problem.

Greening the IT actually means using Information Technology in a more efficient way that reduces both energy consumption and CO2[16] emissions. In simple words, Green IT means providing energy-efficient IT solutions[33].

3.1.1. ENERGY EFFICIENCY IN DATA CENTERS

Moving towards Green IT is an ideal way for organizations to reduce their energy consumption and carbon footprint[17]. John Lamb, in his book The Greening of IT[18], indicates that by upgrading the company's hardware infrastructure to more energy-efficient technologies such as virtual servers, networks, or data storage every three to four years, companies could reduce their energy consumption by up to 50 percent[33]. According to Lamb, virtualization technology together with proper equipment and systems management can significantly reduce an organization's costs[33]. Lamb explains that many large organizations are now moving towards green computing and IT virtualization.

Without any doubt, a major concern in Green IT is data centers[19]. According to the EPA Report on Server and Data Center Energy Efficiency from 2007, the US energy consumption of servers and data centers doubled between 2000 and 2006 and was expected to double again by 2011, to more than 100 billion kWh[34]. In view of this increase, the EPA mandated federal government agencies to work on strategies that help reduce the energy consumption of data centers by at least 20 percent by 2011[35].

Bio Intelligence Service research from 2005 shows that at least 14% of the European Union's ICT electricity consumption belongs to servers and data centers. The survey also shows that communication networks and their equipment use up to 18 percent of the electricity consumed in the IT area[36]. Europe alone accounts for one quarter of the world's electricity usage in data centers[37]. Using cloud computing could significantly reduce the energy consumption of organizations' data centers by reducing their need to own private data centers and infrastructure: cloud computing enables organizations to share cloud data centers at lower cost.

Jonathan G. Koomey studied the energy consumption of data centers around the world in 2008. His report shows that the electricity used by data centers doubled worldwide from 2000 to 2005, and that servers and data centers consumed about one percent of the world's total electricity. The research shows that the annual growth rate of electricity usage in data centers worldwide is around 16.7%, with servers alone accounting for about 80% of this growth[37]. This growth rate is expected to increase further in the coming years[38].
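As a quick sanity check on these figures (an illustrative calculation, not taken from the report), a 16.7% annual growth rate compounds to slightly more than a doubling over five years, which is consistent with the reported 2000 to 2005 doubling:

```python
# Illustrative: compound a 16.7% annual growth rate over five years.
growth_rate = 0.167
years = 5
factor = (1 + growth_rate) ** years
print(round(factor, 2))  # 2.16, i.e. slightly more than a doubling
```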

illustration not visible in this excerpt

Figure 6 - Average annual percentage growth rates in data center electricity use by major world region, 2000-2005. (Source: Worldwide electricity used in data centers - Koomey G. 2008)


Gartner declared in 2007 that Information Technology alone produces 2% of global emissions, which equals the emissions of the entire airline industry. Another report (SMART 2020) comes to the same conclusion and states that IT has great potential to decrease the CO2 emissions of other sectors by 15 to 20% by 2020[39, 40].

illustration not visible in this excerpt

Figure 8 - Composition of Data center Footprint 2002 vs.2020 (Source: SMART 2020 report-page 21)

3.1.2. HOW ENERGY WILL BE USED IN A DATA CENTER

Space, power, and cooling capacity are three important issues concerning data centers in an organization. Green IT tries to make data centers more efficient by optimizing these three main factors. The cost of energy is constantly rising, which is why companies have to invest heavily in the energy consumption of their data centers. By moving into the cloud, organizations can benefit greatly with regard to energy usage thanks to the pay-per-use model of cloud computing: an organization pays only for the services it uses, and this payment also covers the energy and infrastructure costs. To reduce the energy consumption of a data center, it must first be clear where and how the energy is consumed. The figure below presents the electrical power flow[20] in a data center with an efficiency rate of 30%[1]. This 30% refers to the Data Center infrastructure Efficiency (DCiE); the corresponding PUE is around 3.4 (these metrics are explained later in this chapter).

illustration not visible in this excerpt

Figure 9 - Electrical power flow in a sample data center with an efficiency rate of 30% (Source: APC White Paper #114 [1])

In a normal data center, around half or less of the electricity is used for IT equipment[1]. The rest goes to the data center's physical infrastructure, including power equipment, cooling, air conditioning and lighting[1]. The diagram below presents a more efficient data center with 47% efficiency (PUE = 2.13).

illustration not visible in this excerpt

Figure 10 - Electrical power flow in a sample data center with efficiency rate of 47% (Source:[1])

It should be noted that all energy used in a data center is eventually released into the atmosphere in the form of heat. In the next section, different metrics for energy efficiency in data centers are studied.

3.2. MEASURING DATA CENTER EFFICIENCY

One of the major strategies for increasing the energy efficiency of data centers is continuous monitoring and measurement of the organization's existing infrastructure[33]. A green data center can be obtained pre-built as a new energy-efficient facility, or achieved by modifying an existing traditional data center. Some studies show that the better solution is to buy a new energy-efficient (green) data center from the start instead of modifying an existing traditional one[33]. Normally the best time to upgrade to a new energy-efficient data center is every three to four years, which is considered the data center refresh cycle.

Standards are needed to measure the energy efficiency of data centers. Some metrics, like Power Usage Effectiveness (PUE) or Data Center infrastructure Efficiency (DCiE), are available, but unfortunately they do not cover all aspects of data centers[33]. However, there have been, and still are, several efforts under way to define standard metrics for energy efficiency and green servers. The Standard Performance Evaluation Corporation (SPEC), the EPA and EPEAT metrics, ENERGY STAR, Data Center Productivity (DCP), LEED and the Green Grid systems are among the standards currently available in Green IT for measuring energy efficiency. In this section, these standards are studied in more detail.

3.2.1. SPEC STANDARD

The Standard Performance Evaluation Corporation (SPEC) is a non-profit organization which has developed a performance benchmarking system for comparing the energy consumption of servers and computers[41]. SPEC studies and reviews its member organizations and publishes the results in the form of a benchmarking suite. AMD, Dell, HP, Fujitsu Siemens, IBM, Sun Microsystems and Intel are among the SPEC member companies[42].

3.2.1.1. SPEC FOR VIRTUAL SERVERS

This part of SPEC deals with virtualization in data centers. It tries to measure the performance and server power of the virtual machines in a data center by providing a performance benchmark[33]. SPECpower_ssj2008 is considered the first standard SPEC benchmark that studies both the power and the performance of servers. One of the issues studied under SPECpower_ssj2008 is the power consumption of servers when they are idle; the energy consumed by a server while it is actually doing nothing is called active idle. In the Fujitsu example, the server uses on average 260 watts at the highest workload and 187 watts when doing nothing. Studies show that virtualization reduces energy consumption, because it reduces the idle time of the servers[33]. In addition, some data center servers can go into sleep mode (standby) to reduce their energy consumption while idle.
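To put these figures in perspective, the sketch below converts the quoted power draws of the Fujitsu Siemens RX300 S3 into annual energy. Only the two wattages from the text are used; everything else is plain unit conversion.

```python
HOURS_PER_YEAR = 24 * 365  # 8760 hours

def annual_energy_kwh(power_watts: float) -> float:
    """Energy consumed in one year at a constant average power draw."""
    return power_watts * HOURS_PER_YEAR / 1000.0

full_load_w = 260.0    # average power at the highest workload
active_idle_w = 187.0  # average power while doing no useful work (active idle)

print(f"Active idle: {annual_energy_kwh(active_idle_w):.0f} kWh/year")        # 1638
print(f"Full load:   {annual_energy_kwh(full_load_w):.0f} kWh/year")          # 2278
print(f"Idle draw as share of full load: {active_idle_w / full_load_w:.0%}")  # 72%
```

The striking point is the last line: even an idle server draws roughly 72% of its full-load power, which is why consolidating idle servers through virtualization saves so much energy.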

illustration not visible in this excerpt

Figure 11 - SPECpower_ssj2008 for Fujitsu-Siemens RX300 S3 (Source: Greening IT, page 112; the figure presents the average power usage at idle and full load of the Fujitsu Siemens RX300 S3 with Intel CPU)

3.2.2. EPA METRICS

The ENERGY STAR program was started by the Environmental Protection Agency (EPA) in 1992 in the United States to study the energy efficiency of different IT products[33]. After several years, EPA expanded the ENERGY STAR project and now studies a bigger range of IT products with regard to energy efficiency. There has also been some collaboration between SPEC and EPA to improve the EPA energy efficiency metrics[33].

3.2.3. LEED RATING SYSTEM

The Leadership in Energy and Environmental Design (LEED) rating system does not directly address energy efficiency in data centers; it focuses on certifying energy-efficient and sustainable buildings[33]. LEED was developed by the U.S. Green Building Council (USGBC) and is considered an internationally recognized certification for sustainable and green buildings[43]. USGBC administers LEED certification through the Green Building Certification Institute (GBCI), an institute established in 2008[43].

LEED 2009 certification consists of four levels: Certified, Silver, Gold and Platinum[43]. The American College Testing (ACT) data center and the Citigroup Frankfurt data center are among the first data centers to obtain a LEED Platinum certificate; the eBay Topaz data center holds a Gold LEED certificate and the U.S. EPA National Computing Center a Silver one[44]. Note that different countries have different building standards: CASBEE in Japan, BREEAM in England and Green Star in Australia are examples[33].

3.2.4. GREEN GRID POWER EFFICIENCY METRICS

The Green Grid is a consortium of companies and IT professionals concentrating on increasing the energy efficiency of servers and data centers around the world. Currently active members of the Green Grid include AMD, APC, Dell, EMC, Emerson Network Power, HP, IBM, Intel, Microsoft, Oracle and Symantec[45].

The Green Grid has offered two important metrics for energy efficiency in data centers: Power Usage Effectiveness (PUE) and Data Center Efficiency (DCE).

These metrics help server operators calculate and estimate the energy efficiency of data centers in a short amount of time. This is important because it lets operators understand whether the energy efficiency of currently operating data centers needs to be improved[33]. The newer version of DCE has been renamed DCiE, short for Data Center infrastructure Efficiency[46].

3.2.5. POWER USAGE EFFECTIVENESS (PUE)

Power Usage Effectiveness (PUE) is one of the metrics proposed by Christian Belady of the Green Grid. PUE is the ratio of the Total Facility Power (TFP), i.e. the total data center input power, to the data center's IT load[47].

illustration not visible in this excerpt

Figure 12 - Visual representation of PUE calculation (Source: Rasmussen, N., Electrical Efficiency Measurement for Data Centers, APC, Editor. 2010 [48])

Total Facility Power is the total power dedicated to the data center. This includes everything that supports the IT load: power delivery components (UPS, generators, PDUs, batteries etc.), the components of the cooling system (chillers, CRACs, DX units, pumps and cooling towers) and other component loads such as lighting in the data center[33].

The IT Load or IT Equipment Power is the total load of all IT equipment, including compute, storage and network devices, KVM switches, monitors, workstations etc.[33]. Supplemental parts of a data center such as keyboard, mouse and monitor are also counted in the IT load[49]. It is interesting to note that the DCiE is actually the reciprocal of the PUE.

illustration not visible in this excerpt

In principle, DCiE and PUE represent the same issue but can be used differently to calculate the energy consumption of a data center[33]. The value of PUE can range from 1.0 to infinity, where a PUE of 1.0 (the smallest possible value) indicates 100 percent energy efficiency. Currently no data center has a PUE of 1.0; typical values across data centers are estimated at around 3.0 or even more. However, with better data center design it is possible to reduce the PUE down to about 1.6 (as in Google's data centers, for example)[33].
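The reciprocal relationship between PUE and DCiE can be sketched in a few lines of Python. The function names and the sample power values are illustrative; the 1000 kW / 470 kW pair is chosen to match the 47%-efficient data center of Figure 10.

```python
def pue(total_facility_power_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility input power over IT load (>= 1.0)."""
    return total_facility_power_kw / it_load_kw

def dcie(total_facility_power_kw: float, it_load_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE (a fraction)."""
    return it_load_kw / total_facility_power_kw

# A facility drawing 1000 kW in total, of which 470 kW reaches the IT
# equipment, corresponds to the 47%-efficient data center of Figure 10.
print(round(dcie(1000.0, 470.0), 2))  # 0.47
print(round(pue(1000.0, 470.0), 2))   # 2.13
```

Because one metric is simply the inverse of the other, any DCiE figure can be converted to a PUE figure and vice versa.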

illustration not visible in this excerpt

Figure 13 - The closer PUE is to 1, the better the energy efficiency (Source: The Green Data Center; Steps for the Journey, 2008)

3.2.6. DATA CENTER PRODUCTIVITY (DCP)

Data Center Productivity (DCP) is a metric developed by the Green Grid to calculate the productivity of data centers. DCP is essentially the ratio of the useful work produced by the data center to the quantity of resources (total facility power) consumed to produce that work[50]. The DCP metric is a parallel effort to the EPA and SPEC work on data centers and servers, developed by the Green Grid on the basis of PUE and DCiE.

In the DCP calculation, the data center is considered a black box which receives power as input and produces heat; data goes in and out, and the black box is considered to work on the data and produce useful work[33].

illustration not visible in this excerpt

Useful Work = Sum over all tasks i of [ Value of task i (not all tasks have the same value) × Time-based utility function for task i (based on its Service Level Agreement, SLA) × Absolute time of completion for task i ]

In recent years the Green Grid has been refining the PUE and DCiE metrics by dividing them into different components. The result is presented below:

illustration not visible in this excerpt

In this equation (PUE = CLF + PLF + 1.0), the Cooling Load Factor (CLF) represents the share of energy used by the cooling systems, the Power Load Factor (PLF) represents the share used by switches and power supplies (power load), and the constant 1.0 is the IT load factor (the normalized IT load)[33].

Different tools can be used to calculate the PUE and energy efficiency of data centers. Among the major tools for this purpose are Insight Control from HP[21] [51] and an APC tool[1].

3.3. MODULARITY OF DATA CENTERS

Since data centers are major energy consumers in an organization, greening them is an important issue for many organizations. In the context of Green IT, it is usually easier and more effective to build an efficient green data center from scratch than to improve a company's existing infrastructure[33]. For this purpose, a cycle of processes needs to be carried out to produce a green and effective data center.

First, a comprehensive analysis should be conducted of the infrastructure requirements the organization will need in the future. Next, the modularity of the data center should be planned and studied; power, cooling systems and other infrastructural issues should be considered. After designing the modularity, different tools and software[22] should be used to model and design the data center[33].

illustration not visible in this excerpt

Figure 14 - Sun Modular Data center (Black Box) source: http://it.wikipedia.org/wiki/Sun_Modular_Datacenter

As mentioned before, establishing a green data center is easier when it is planned from the beginning. This is why buying the latest technology can significantly help to increase the energy efficiency of data centers.

Since portability and energy efficiency are two important issues in creating green data centers, several companies have been working on producing better and greener data centers. As an example, Sun Microsystems has been working on a project called the Sun Modular Datacenter (Sun MD) or "BlackBox", which is a portable, energy-efficient data center inside a 20-foot shipping container[47].

The Sun MD has been specially designed to reduce energy consumption and minimize the space required for a data center. The mobility and energy efficiency of this kind of data center significantly improve operating efficiency[47].

In addition to Sun, several other companies have been working on green and modular data centers. The IBM modular data center used at Bryant University and Google's "Data Center in a Box", which can be expanded quickly, are among them[52, 53]. Rackspace, a public cloud provider, also has a similar container-based modular data center called ICE Cube[33].

The Power Usage Effectiveness (PUE) is important evidence of how energy efficient these container-based modular data centers are. As explained before, an ideal data center would have a PUE of 1.0, which is very difficult to achieve in reality. It is interesting to note that Google has its own strategy for improving the efficiency of its data centers: measuring the PUE, managing the air flow, adjusting the thermostat, using natural cooling solutions, optimizing the power distribution and buying efficient servers are among the steps Google suggests for improving data center energy efficiency[54].

illustration not visible in this excerpt

Figure 15 - Google’s Data Center efficiency measurement (Source: Google [55])

The table below presents a comparison of major cloud data centers[31].

illustration not visible in this excerpt

Table 2 - Comparison of major cloud data centers and their sources of energy (Source: Greenpeace cloud computing report[31])

The figure below presents a sample model of a cloud data center designed by Bryan Christie. The proposed cloud data center model is based on an Automated Storage and Retrieval System (ASRS)[56]. Such data centers are supposed to be designed essentially roofless[3].

illustration not visible in this excerpt

Figure 16 - Modular expandable cloud data center model. Bryan Christie Design (source: IEEE Spectrum[57])

3.4. ENVIRONMENTAL ISSUES

In parallel to the electricity usage of data centers and servers, the amount of carbon dioxide (CO2) produced over the life cycle of data centers and IT infrastructure is very important. The role of Green IT in reducing greenhouse gas (GHG) emissions is undeniable. CO2 emissions and global warming have been in focus for several years, and many governments as well as non-governmental organizations are working on regulations to control carbon and greenhouse gas emissions. The major aim of such regulations is to calculate the carbon emissions of activities and to define strategies that limit global warming by reducing the carbon footprint.

Reducing GHG emissions will help limit global warming and the environmental changes it causes, which is nowadays an important concern for many governments around the world. An EPA report to the US government indicates that by using energy-efficient data centers, the US government could achieve a 20 percent reduction in its carbon footprint by 2011[58]. In addition, before 2004 European member states agreed (based on the Kyoto Protocol) to reduce GHG emissions by 8 percent by 2012[58].

illustration not visible in this excerpt

Figure 17 - Major greenhouse gas emissions since 1978 (Source: http://en.wikipedia.org/wiki/Kyoto_Protocol)

illustration not visible in this excerpt

Figure 18 - The global warming as a result of GHG emissions during recent years, (Source: global greenhouse warming [59])

To understand how green a data center is, the consumed energy must first be calculated and then converted into CO2 emissions. The mix or profile of energy sources used to generate the electricity serves as the basis for calculating a data center's CO2 emissions[58].

It is very important to understand that the location of a data center plays an important role in its total carbon footprint. For example, data centers located in areas with more access to hydro, wind, solar or even nuclear energy (sustainable energies) have a smaller carbon footprint than data centers that rely more heavily on fossil energy[58]. This is because of the large amount of CO2 produced when generating energy from fossil fuels; coal and oil in particular have a very high carbon content compared to natural gas. For example, data centers in a country like France, which gets 78% of its energy from nuclear sources, would have lower CO2 emissions than in the USA, which gets 71% of its energy from oil, gas and coal generators[58].
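The effect of the generation mix can be sketched as a weighted average. The fuel shares and per-fuel emission factors below are illustrative assumptions for the sake of the example, not official grid data for either country.

```python
# Approximate, assumed emission factors in kg CO2 per kWh generated.
EMISSION_FACTORS = {
    "coal": 0.95, "oil": 0.75, "gas": 0.45,
    "nuclear": 0.0, "hydro": 0.0, "wind": 0.0,
}

def grid_intensity(mix: dict) -> float:
    """Weighted-average kg CO2/kWh for a generation mix (shares sum to 1)."""
    return sum(share * EMISSION_FACTORS[fuel] for fuel, share in mix.items())

# Hypothetical mixes: nuclear-heavy (France-like) vs fossil-heavy (US-like).
france_like = {"nuclear": 0.78, "hydro": 0.12, "gas": 0.05, "coal": 0.05}
us_like = {"coal": 0.45, "gas": 0.20, "oil": 0.06, "nuclear": 0.19, "hydro": 0.10}

print(f"Nuclear-heavy grid: {grid_intensity(france_like):.2f} kg CO2/kWh")
print(f"Fossil-heavy grid:  {grid_intensity(us_like):.2f} kg CO2/kWh")
```

Even with rough factors, the fossil-heavy mix comes out several times more carbon-intensive per kWh, which is why two identical data centers in different locations can have very different footprints.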

Iceland can also be recognized as an interesting location for data centers because of the vast renewable energy resources (for example wind) available in the country[60].

It should be noted that the total CO2 emissions of any data center over its complete product life cycle consist of the CO2 emitted in the manufacturing, packaging, transportation, storage, operation and disposal stages. However, many CO2 emission calculations are based only on the CO2 a data center produces in the operational stage of its life cycle[58].

illustration not visible in this excerpt

Figure 19 - Total CO2 emission of the Data Center Product Life Cycle (Estimating a Data Center's Electrical Carbon Footprint, Dennis Bouley [58])

APC has developed a free web-based tool for calculating the carbon footprint of data centers. The tool shows how green a data center is by converting its energy usage into carbon emissions[61].

The APC tool uses the Data Center infrastructure Efficiency (DCiE), the IT load and the location of the data center as inputs and calculates the data center's CO2 emissions from this information. The diagram below presents the CO2 emission calculation for two sample scenarios:

illustration not visible in this excerpt

Figure 20 - APC tool for calculating CO2 emissions in data centers based on DCiE, IT load and location of the Data center (Source: [61])

In this example two scenarios are studied: scenario 1, a data center with a Power Usage Effectiveness of 1.9, and scenario 2, a data center with a better PUE of 1.3. Both data centers have the same IT load and are located in Austria. The IT load for any individual case can be calculated using another APC tool called the Data Center Power Sizing Calculator[23].

As the results show, in scenario 2 (with the better DCiE) a total of 4,542,222 kWh (kilowatt-hours) is saved per year. Besides the financial benefit this brings the organization (499,367 Euro/year), the CO2 footprint is significantly reduced from 3,196 tonnes per year to 2,301 tonnes per year, i.e. 895 tonnes of CO2 less.
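The arithmetic behind the two scenarios can be approximated as follows. The IT load and the grid emission factor used here are back-calculated assumptions chosen so that the result lands near the quoted figures; the real APC tool derives everything from its DCiE, IT load and location inputs.

```python
IT_LOAD_KW = 864.0       # assumed constant IT load (back-calculated from the savings)
HOURS_PER_YEAR = 24 * 365
CO2_KG_PER_KWH = 0.197   # assumed grid emission factor for the location

def annual_kwh(pue: float) -> float:
    """Total facility energy drawn in one year at the given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR

saved_kwh = annual_kwh(1.9) - annual_kwh(1.3)
saved_tonnes = saved_kwh * CO2_KG_PER_KWH / 1000.0

print(f"Energy saved: {saved_kwh:,.0f} kWh/year")       # ~4.54 million kWh
print(f"CO2 avoided:  {saved_tonnes:,.0f} tonnes/year") # ~895 tonnes
```

Because both scenarios share the same IT load, the saving is simply the IT load times the PUE difference (1.9 − 1.3) times the hours in a year, which reproduces the order of magnitude of the tool's output.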

APC has also developed other free web-based tools, like the Energy Efficiency Calculator[24], which receives the power capacity, local electricity rate and physical infrastructure information (power, cooling and lighting) and calculates the energy efficiency and annual electricity cost. Another tool (the IT Energy Allocation tool[25]) calculates the yearly energy cost and the yearly carbon emissions per server.

3.5. REGULATIONS AROUND THE WORLD

Environmental issues and climate change have led many countries to approve rules and regulations to reduce carbon dioxide emissions and control the production of greenhouse gases (GHG) in the atmosphere. Among these efforts, the Kyoto Protocol of the United Nations Framework Convention on Climate Change (UNFCCC), the Bali Road Map (2007)[26], COP 15 (the Copenhagen Summit)[27] and COP 16 (Cancún, Mexico 2010)[28] are notable[62, 63]. In addition, the European Code of Conduct on Data Centre Energy Efficiency was an effort by the European Union (EU) with the goal of increasing average data center efficiency by up to 30 percent[64].

According to the second version of the European Code of Conduct on Data Centre Energy Efficiency (2009), data center energy consumption in Western Europe was around 56 terawatt-hours (TWh) per year in 2007 and is expected to grow to 104 TWh per year by 2020[64]. The European Commission indicates that such an increase in data center energy consumption in the EU is a challenging problem for European environmental policy, and that the energy efficiency of data centers must be maximized in order to reduce carbon emissions and the other negative impacts of this energy usage[64].

At the Copenhagen Summit, the OECD countries[29] agreed that Information and Communication Technology plays an important role in tackling climate change and improving the environment. At this summit, the OECD policy makers agreed to try to reduce CO2 emissions and limit the impacts of climate change[65].

Considering these environmental issues, many businesses and companies are now trying to move towards producing more eco-friendly products, by implementing new strategies for saving energy and greening their supply chains. Since green products are now at the center of attention of many consumers, businesses and producers are focusing on environmental and sustainability strategies. Moving towards cloud computing is a considerable option for companies that want to achieve these goals and improve their reputation through green IT.

3.5.1. KYOTO PROTOCOL

The Kyoto Protocol was signed by the 191 member countries of the United Nations (UN) to combat global warming by reducing greenhouse gas (GHG) emissions[66]. Energy consumption in data centers is not explicitly addressed in the Kyoto Protocol, but the protocol is still counted among the most important agreements regarding global warming. The Kyoto Protocol, an international agreement under the United Nations Framework Convention on Climate Change (UNFCCC), was initially proposed in 1997 in Kyoto, Japan; the United States signed the protocol in 1998 but decided not to ratify it[66].

Under this protocol, 37 industrialized countries, including several European states, committed themselves to reduce the emissions of major greenhouse gases, including CO2 (a major factor in global warming), by an average of 5% below 1990 levels over a five-year period (2008 to 2012)[62]. The Kyoto Protocol entered into force on 16 February 2005[66].

However, developing countries like China, India and Brazil have been exempted from this protocol and categorized as a non-obligated group of countries[33]. The Kyoto Protocol consists of three major mechanisms:

- Carbon trading market (cap-and-trade)
- Clean Development Mechanism (CDM)
- Joint Implementation

In the carbon trading market, the central government assigns a limit or cap on the amount of GHG emissions, and companies are required to keep their emissions within this allowance (or credit). A company may, if needed, acquire additional emission allowances from other companies that are below their GHG limit[33]. This transfer of allowances is called carbon trading.

3.6. CLOUD COMPUTING

By moving into the cloud, companies can reduce the resources they need, including servers and data centers, software purchases and the IT professionals required for maintenance and support. Even after considering the cost of migrating to the cloud and issues like return on investment (ROI), this move leads in many (though not all) cases to significant cost and energy reductions[9]. The cost reduction is most noticeable for small to medium-sized companies.

Regarding cases where cloud computing is inefficient, a report by the University of Melbourne shows that cloud computing can in some cases consume more energy than in-office computing. Rod Tucker explains in his research that watching a film on a single laptop can be more energy efficient than watching the same film streamed from a data center[67].

In 2010 Pike Research published a paper (Figure 21) estimating the energy efficiency of cloud computing. According to this report, cloud computing is expected to grow by 28.8% between 2010 and 2015, resulting in a significant reduction in energy consumption and GHG emissions[68].

Cloud computing provides organizations with great scalability and elasticity, which means companies can use resources according to their own needs. This capability gives companies the opportunity to pilot innovative projects without taking great risks and expenses: if a project fails, the company does not lose the money it would otherwise have invested in buying resources for it[69].

illustration not visible in this excerpt

Figure 21 - Data Center GHG emission, 2009 to 2020 (Source: PikesResearch - Cloud Computing Efficiency 2010)

From another point of view, the utilization problem in organizations is an important issue. In a cloud, servers can be shared or virtualized across applications or even operating systems. Cloud computing takes advantage of virtualization, which lets resources be shared among different applications; this helps reduce the number of servers needed and hence the energy consumption[4].

In 2010 Microsoft, Accenture and WSP Environment and Energy conducted a joint study on the possible environmental impacts of using cloud computing. In this research, three Microsoft products (Microsoft Exchange, Microsoft SharePoint and Microsoft Dynamics CRM) were tested for different organization types (large, medium and small). The aim was to see how much the environmental impact could be reduced by migrating to the cloud, depending on the organization's type and size. The study categorized organizations by the number of users they have. For example, a company with 100 or fewer users is categorized as a small organization, a company with approximately 1000 users is recognized as a

[...]


[1] Greenhouse gas - http://en.wikipedia.org/wiki/Greenhouse_gas

[2] Proposed by Brijesh Deb, 2010- IBM developerWorks

[3] Query language for semantic web - http://www.w3.org/TR/rdf-sparql-query/

[4] http://en.wikipedia.org/wiki/Virtual_private_network

[5] Cloud Computing and Grid Computing 360-Degree Compared, Ian Foster, Yong Zhao, Ioan Raicu, Shiyong Lu 2009

[6] Twenty-One Experts Define Cloud Computing, Cloud Expo: Article, editor Jeremy Geelan, January 24, 2009

[7] Service Level Agreement, http://en.wikipedia.org/wiki/Service_level_agreement

[8] Multitenancy, http://en.wikipedia.org/wiki/Multitenancy

[9] https://cloudsecurityalliance.org/csaguide.pdf

[10] Intuit Quickbooks Online, http://quickbooksonline.intuit.com/

[11] Google App Engine, http://code.google.com/intl/de-DE/appengine/

[12] Volunteer Computing , http://en.wikipedia.org/wiki/Volunteer_computing

[13] http://www.ibm.com/developerworks/cloud/library/cl-cloudintro/index.html

[14] Google is now working on its own container data center facilities. http://www.datacenterknowledge.com/archives/2009/04/01/google-unveils-its-container-data-center/

[15] http://www.datacenterknowledge.com/archives/2008/03/27/google-data-center-faq/

[16] Carbon dioxide (From the Encyclopædia Britannica) : (CO2), a colorless gas having a faint, sharp odor and a sour taste; it is a minor component of the Earth’s atmosphere (about 3 volumes in 10,000), formed in combustion of carbon-containing materials, in fermentation, and in respiration of animals and employed by plants in the photosynthesis of carbohydrates. The presence of the gas in the atmosphere keeps some of the radiant energy received by the Earth from being returned to space, thus producing the so-called greenhouse effect. 2011

[17] Carbon footprint (Carbon dioxide emissions coefficient): measurement of the amount of greenhouse gas including CO2 produced during normal activities.

[18] The greening of IT, IBM Press ISBN: 978-0-13-715083-0

[19] A data center (or data centre or datacentre) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices. (Wikipedia English 2011)

[20] CRAC is originated from computer room air conditioning units, PDU (Power Distribution Unit), UPS (uninterruptible Power Supply)

[21] http://h18004.www1.hp.com/products/servers/management/dynamic-power-capping/index.html

[22] Different companies provide such tools. As example APC,IBM, Sun and HP are among these companies

[23] http://www.apc.com/tool/?tt=l

[24] http://www.apc.com/tool/?tt=6

[25] http://www.apc.com/tool/?tt=2

[26] The United Nations Climate Change Conference in Bali, http://unfccc.int/meetings/cop_13/items/4049.php, last viewed on 11.26.2010

[27] The United Nations Climate Change Conference in 2009, http://unfccc.int/meetings/cop_15/items/5257.php , last viewed on 11.27.2010

[28] The United Nations Climate Change Conference, Cancún, Quintana 2010, http://www.cc2010.mx/

[29] Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, Greece, Hungary, Israel, Italy, Japan, S. Korea, Luxembourg, Mexico, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States, 2011 http://www.oecd.org/countrieslist/0,3025,en_33873108_33844430_1_1_1_1_1,00.html

End of excerpt from 170 pages

Details

Title
Business Process on‐Demand; Studying the Enterprise Cloud Computing and its Role in Green IT
University
Technische Universität Wien
Course
Business Engineering and Computer Science
Grade
1
Author
Year
2011
Pages
170
Catalog Number
V177372
ISBN (eBook)
9783640991624
ISBN (Book)
9783640991839
File size
12443 KB
Language
English
Keywords
business, process, studying, enterprise, cloud, computing, role, green
Cite this work
Dipl.-Ing. Seyed Amir Beheshti (Author), 2011, Business Process on‐Demand; Studying the Enterprise Cloud Computing and its Role in Green IT, München, GRIN Verlag, https://www.grin.com/document/177372
