Algorithms for Energy Efficient Load Balancing in Cloud Environments


Seminar Paper, 2013
17 Pages, Grade: 1.0

Excerpt

Contents

1 Introduction

2 Load Balancing in Cloud Environments

3 Search Strategy and Review Protocol
3.1 Research Goals
3.2 Search Term
3.3 Source Databases
3.4 Selection and Refinement Process

4 Results
4.1 First Refinement
4.2 Second Refinement

5 Discussion
5.1 RQ1
5.2 RQ2
5.3 RQ3
5.4 Current Status of the Research
5.5 Load Balancing Algorithm Architecture

6 Limitations

7 Conclusion and Further Work

References

1 Introduction

In recent years, the growing awareness of climate change has highlighted how important energy efficiency is for modern society. Computing is becoming ever more pervasive and is consequently a major factor in the rising demand for energy. In particular, the rise of cloud computing has led to the construction of large data centers that consume a growing share of the electricity produced. Recent studies show that about 2% of the total energy demand of the United States of America originates from data centers [1]. These data centers host the modern cloud computing environments.

So far, data center research has concentrated on optimal resource utilization, fast response times, or high availability, while energy efficiency has always been of secondary importance. Modern cloud computing architectures offer a powerful design to fulfill the performance requirements of customers. Once the functional requirements are met, the data center owner starts to focus on cost reduction. Depending on the source, energy accounts for between 20% and over 50% of the overall costs. Thus, even a slight decrease in energy consumption has a significant impact on profitability.

Data centers have multiple energy consumers. Minor consumers include lighting and backup power systems. The second largest consumer is the cooling of the data center [2]. The largest consumers, however, are the servers, with around 50% of total consumption [2]. Combining that figure with the earlier one, stating that half of the operational costs are energy costs, makes the electricity consumption of the servers responsible for roughly a quarter of all running costs. Additionally, reducing the energy consumption of the servers can lower the cooling costs as well. There are multiple approaches to reducing energy consumption in a data center. In order to obtain a focused result, this review is limited to one specific technique: load balancing.

In order to get a full overview of the topic of energy efficient load balancing algorithms, a structured literature review is conducted. Section 2 introduces how load balancing algorithms can help to reduce the energy consumption of a data center. Section 3 describes the search process and the different criteria for the search. Section 4 presents the results. Section 5 interprets the results from the previous section. Section 6 discusses the limitations of this review. Section 7 summarizes the work in this paper and gives an outlook on the future of optimizing load balancing algorithms towards energy efficiency.

2 Load Balancing in Cloud Environments

Current cloud environments run on virtual machines (VMs). Each heavy-duty physical server typically powers multiple VMs at the same time. The VMs can have different configurations and service level agreements (SLAs) depending on the customer. Mainly, three abilities make it possible to save energy:

overallocation,

live migration,

shutting down servers, depending on the overall data center load.

Overallocation is the transfer of the overbooking principle, commonly practiced in the hotel and airline industry, to the IT industry [3]. In the data center context it means that the VMs placed on a physical server have more resources reserved than the server actually provides. For example, the server has 24 GB of RAM while three VMs, each configured with 12 GB of RAM, are currently running on this machine. This is only possible as long as the VMs do not need their full RAM capacity, since the server only has 24 GB and not the required 36 GB. However, the load of one or multiple VMs could change at any time and increase the RAM demand. The data center owner wants to avoid a potentially costly SLA violation. Thus, one of the VMs should be moved to a different server.
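The core of overallocation is bookkeeping: reserved capacity may exceed physical capacity, but actual usage must not. The following minimal sketch illustrates this with the 24 GB example above; all names (Host, VM, needs_migration) and the safety margin are hypothetical and only serve to make the idea concrete, they are not taken from any of the reviewed algorithms.

# Illustrative sketch of an overallocation check; all names and thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    reserved_ram_gb: float   # RAM promised to the customer (SLA)
    used_ram_gb: float       # RAM actually in use right now

@dataclass
class Host:
    name: str
    ram_gb: float
    vms: list = field(default_factory=list)

    def reserved_ram(self):
        return sum(vm.reserved_ram_gb for vm in self.vms)

    def used_ram(self):
        return sum(vm.used_ram_gb for vm in self.vms)

def needs_migration(host: Host, safety_margin: float = 0.9) -> bool:
    """A host may be overallocated by design; a VM only has to be moved away
    when the actual usage approaches the physical capacity."""
    return host.used_ram() > safety_margin * host.ram_gb

# Example from the text: a 24 GB host running three VMs that reserve 12 GB each (36 GB total).
host = Host("server-1", ram_gb=24, vms=[
    VM("vm-a", reserved_ram_gb=12, used_ram_gb=6),
    VM("vm-b", reserved_ram_gb=12, used_ram_gb=7),
    VM("vm-c", reserved_ram_gb=12, used_ram_gb=5),
])
print(host.reserved_ram())    # 36 GB reserved on a 24 GB machine: overallocated
print(needs_migration(host))  # False while actual usage (18 GB) stays below the margin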

Live migration is a technique to move an active VM from one server to another without interrupting its availability or its ongoing computations. Nevertheless, live migration is expensive. For the entire duration of the transfer, both physical machines reserve the full VM capacity. Additionally, it causes traffic on the data center's internal network, which is often the scarcest resource, as well as on the storage area network.
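As a rough back-of-envelope illustration of this cost (the formula, the dirty-page factor, and all parameter values below are assumptions of this sketch, not figures from the reviewed papers), the price of a live migration can be approximated by the time the VM's memory travels over the internal network, during which its capacity is reserved on both hosts:

# Illustrative estimate of live-migration cost; parameters and formula are assumptions.
def migration_cost(vm_ram_gb: float, link_gbit_s: float, dirty_rate_factor: float = 1.3):
    """Estimate transfer time and the extra capacity reserved during a live migration.

    dirty_rate_factor roughly accounts for memory pages that change while the
    pre-copy transfer is running and therefore have to be sent again.
    """
    gb_to_move = vm_ram_gb * dirty_rate_factor        # data sent over the internal network
    transfer_s = gb_to_move * 8 / link_gbit_s         # GB * 8 = Gbit, divided by Gbit/s
    double_reserved_gb = vm_ram_gb                    # RAM held on both source and target host
    return transfer_s, double_reserved_gb

seconds, extra_gb = migration_cost(vm_ram_gb=12, link_gbit_s=10)
print(f"~{seconds:.0f} s of network traffic, {extra_gb} GB reserved twice in the meantime")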

The typical situation in a cloud environment is that of hundreds of physical machines powering thousands of VMs with different configurations, different SLAs, and quickly changing loads. Load balancing algorithms place and live-migrate VMs across a controlled number of running servers. This field has been researched with different optimization goals (uptime, response time, minimizing internal traffic). As shown in the introduction, optimizing towards energy efficiency has a direct impact on the costs of a data center, which makes this goal attractive.
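A common, generic way to turn these abilities into energy savings is to pack the VMs onto as few physical servers as possible so that the remaining servers can be shut down. The sketch below shows such a consolidation step as a simple first-fit-decreasing heuristic; it reuses the hypothetical Host and VM classes from the earlier sketch and illustrates the general principle only, not a specific algorithm found in the review.

# Minimal sketch of an energy-oriented placement step (first-fit decreasing):
# pack VMs onto as few hosts as possible so the remaining hosts can be powered down.
def consolidate(vms, hosts, max_overallocation=1.5):
    """Place VMs on hosts, allowing reservations up to max_overallocation * capacity."""
    active = []                                        # hosts that end up powered on
    for vm in sorted(vms, key=lambda v: v.reserved_ram_gb, reverse=True):
        target = next(
            (h for h in active
             if h.reserved_ram() + vm.reserved_ram_gb <= max_overallocation * h.ram_gb),
            None,
        )
        if target is None:                             # no active host fits: power on a new one
            target = hosts[len(active)]
            active.append(target)
        target.vms.append(vm)
    return active                                      # every host not in this list can sleep

# Usage sketch: six VMs packed onto as few of four identical hosts as possible.
hosts = [Host(f"server-{i}", ram_gb=24) for i in range(4)]
vms = [VM(f"vm-{i}", reserved_ram_gb=r, used_ram_gb=r / 2)
       for i, r in enumerate([12, 12, 8, 8, 6, 4])]
active_hosts = consolidate(vms, hosts)
print(len(active_hosts), "of", len(hosts), "hosts stay on")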

3 Search Strategy and Review Protocol

A literature review is supposed to be reproducible. Thus, the following subsections describe the different aspects of the paper search, selection, and refinement.

3.1 Research Goals

RQ1: What techniques are used by load balancing algorithms to increase energy efficiency?

Load balancing is not a new problem. A very similar problem was already studied in the early days of distributed computing. Exactly as in the past, there are different ideas for balancing the load in a cloud environment. This research question aims to grasp the basic concept behind each algorithm. An overview of the different techniques provides a good opportunity to understand which approaches are popular, where the research is heading, and what could come in the future.

RQ2: How are the energy efficiency improvements measured?

A researcher who has an idea for improving energy efficiency needs to demonstrate the improvement in some way. This could be, for example, an experiment in the researcher's own data center or a simulation with real load data. However, if the improvement is only shown by an experiment in one particular data center, the idea is validated for one particular case. That does not necessarily mean that it will work as well for other data centers with largely different setups. In other words, examining how a researcher demonstrated the improvement gives a hint about the generality of the solution. A slightly higher efficiency in all cases is preferable to a much higher efficiency in one rarely occurring case. Looking into this question also reveals the limitations of the algorithms, e.g. whether an algorithm is only applicable in low-load situations. These points make this an important research question.

RQ3: Does applying the algorithm in order to reduce energy consumption affect overall performance or system reliability?

The concept of load balancing seems to be an advantage without a drawback: the algorithms can reduce energy consumption with, apparently, no performance drops. However, real world experiments indicate that load balancing does have drawbacks and does affect the system. This question is supplementary to the actual literature research and does not limit the search for different algorithms. Nevertheless, in order to get an idea out of the research lab and into a running data center, the research must consider the downsides too. A data center owner who saves money on energy but, for example, loses three times as much in revenue will not consider the algorithm effective. This question aims to clarify whether, and how many of, the current researchers pay attention to this matter.

[...]


[1] J. Koomey, Growth in Data Center Electricity Use 2005 to 2010, Analytics Press, 2011.

[2] Emerson Network Power, Five Strategies for Cutting Data Center Energy Costs Through Enhanced Cooling Efficiency, white paper.

[3] A. Sulistio, K. H. Kim, and R. Buyya, "Managing Cancellations and No-Shows of Reservations with Overbooking to Increase Resource Revenue," in Proceedings of the Eighth IEEE International Symposium on Cluster Computing and the Grid (CCGRID 2008), IEEE Computer Society, 2008, pp. 267–276.

