Concept for system virtualization in the field of high availability computing


Bachelor Thesis, 2007

49 Pages


Excerpt


Table of contents

Abstract

Introduction
1.1. Why virtualization?
1.2. Brief history of virtual environments & virtualization
1.3. Server virtualization techniques & terminology

2. Server virtualization fundamentals
2.1. Hypervisor / Virtual machine monitor
2.2. Hardware support for virtualization

3. Server virtualization software and techniques
3.1. VMWare GSX/ESX Server
3.2. Microsoft Virtual PC
3.3. XEN
3.3.1 Sharing drivers / hardware amongst virtual environments
3.4. Virtuozzo / OpenVZ
3.5. Linux VServer Project
3.6. Performance and feature comparison

4. Case study: Virtualization for highly available systems
4.1. Motivation
4.2. Definition and requirements of a highly available system
4.3. Case study
4.3.1 Cost comparison
4.4. XEN installation and caveats
4.4.1 Redundant Disk Arrays
4.4.2 Highly available file server
4.4.3 Alive and kicking with heartbeat
4.4.4 Deployment of the guest OS
4.4.5 The role of Virtual Machine Migration (VMM)

5. Results
5.1. Bringing the worst case together
5.2. Limitations and problems in terms of practicability
5.3. Limitations in terms of availability
5.4. Improvements to gain real high-availability

6. Summary and Outlook
6.1. Will Virtualization begin a new era in high availability computing?
6.2. Natively supported operating systems
6.3. Limitation of virtualization: Licensing aspects
6.4. Problems of virtual environments
6.5. Virtualization rootkits… a new threat?

Table of abbreviations

Bibliography

Abstract

Server virtualization is a field of IT that is currently undergoing rapid development. By spreading load evenly across server farms and thereby improving the total cost of ownership (TCO), virtualization has already attracted the full attention of the industry, resulting in massive participation and large acquisitions. With server virtualization the size of server farms can be reduced dramatically, leading to a lower TCO and - by using techniques like Linux-HA [LXHA] in virtual environments - to increased availability. Even with an overhead of 10-20% in the virtualization layer it remains very attractive, since the load on a server farm can be spread evenly (which is a main target not only of server virtualization but of distributed systems in general).

This work gives insight into current developments in the field of server virtualization and the various techniques involved, as well as a short historical overview of when the first types of virtualization were introduced (and why they failed, since, according to A. Tanenbaum, current hardware is not designed to be virtualized). The findings are mostly supported by examples of current virtual environments (especially the XEN project).

The paper first introduces the different ways of virtualizing a system and to what extent hardware can or cannot support this. The chapter discusses the functions, advantages and disadvantages of the three types of virtualization: “full system emulation”, paravirtualization and native (OS) virtualization. It also addresses the different approaches to managing virtualized servers, a component mostly called the “hypervisor”.

The second part introduces the current software for virtualizing a server and giving users their own separate environment, ranging from early approaches like BSD jails to the Linux VServer and XEN projects. The chapter also covers commercial solutions such as Virtuozzo from SWsoft, the VMWare GSX Server and Microsoft’s Virtual PC. All of these solutions have outstanding aspects and use different virtualization types, yet there is no predominant software package. The chapter also covers the security aspects of virtualized systems, since (especially with para- and native virtualization) two or more systems (with administrative access) run on the same physical machine and share the same memory and storage. This is generally referred to as “isolation” in virtual environments (VE).

The last chapter covers the use of server virtualization in high-availability environments. With n physical machines and m virtualized systems on each machine, (n * m) / x independent services can be hosted, where x is the level of redundancy (e.g. x = 2 for mirroring). Most interesting is how virtual environments can be moved from one physical machine to another and how this can be achieved without human intervention.
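
As a minimal illustration of this relation, the following Python sketch computes how many independent services such a cluster can host; the figures are purely hypothetical and only restate the formula (n * m) / x:

def independent_services(n_physical, m_virtual_per_host, redundancy):
    # Number of independent services when every service is replicated
    # 'redundancy' times across the physical hosts.
    return (n_physical * m_virtual_per_host) // redundancy

# Example: 2 physical machines, 5 virtual environments each, mirrored (x = 2)
print(independent_services(2, 5, 2))   # -> 5 services, each with a hot standby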

After setting up a test scenario we evaluate the findings and draw a conclusion on how virtualization and high availability can be effectively combined to provide both flexibility and availability. The summary focuses mostly on the newest developments in virtualization and on the side effects virtualization may have on the computer industry.

Introduction

1.1. Why virtualization?

In 2004 the Gartner Group announced that the utilization of modern 2- or 4-way servers averages only 10 to 15%. This underutilization of server systems is probably one of the biggest motivations for virtualization, but there are many other advantages that cannot be achieved with standard servers. Underutilization has also been a motivation for grid computing, yet grid computing is not as versatile and flexible as server virtualization is and will be.

Virtualization enables applications with completely conflicting system requirements to run on the same machine, without investments in new hardware. It provides a homogeneous working environment within a heterogeneous infrastructure. The available resources can be distributed among the virtualized servers according to their needs, load balancing can easily be established, and a remote standby system can be set up almost instantly.

When it comes to availability, the “one application per server” strategy is clearly inadequate because, from the single application’s point of view, it does not offer any redundancy or high availability. The typical strategy for five demanding services is to buy five oversized servers (in order to accommodate future growth in performance needs), none of which is redundant. With virtualization the five servers can be reduced to two high-performance servers. This setup is not only cheaper in terms of hardware acquisition but especially in terms of maintenance, licensing, power supply, security, backup and air conditioning, which over time can add up to several times the price of the server itself. Running all five services as virtual environments on each of the two servers yields a classical failover setup, so that even in the case of a hardware defect the failover system can take over the work of the primary system. Backing up virtual environments is fairly easy, as snapshots of a working environment can be created without taking the system down.
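
A rough back-of-the-envelope comparison in Python may make the argument more concrete; all monetary figures below are invented for illustration and are not taken from the case study in chapter 4:

# Hypothetical cost sketch: five stand-alone servers versus two
# virtualization hosts carrying the same five services.
ACQUISITION = 5000        # assumed purchase price per server
YEARLY_OPERATION = 3000   # assumed maintenance, power, cooling, backup per server
YEARS = 3

def total_cost(servers):
    return servers * (ACQUISITION + YEARS * YEARLY_OPERATION)

print("one application per server:", total_cost(5))
print("two virtualization hosts:  ", total_cost(2))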

Distributing the processing power of a single system is much easier and can be adapted instantly. If an application needs a major performance boost, you can first check whether the other virtual environments can be scaled down, or otherwise upgrade the failover server and switch over without downtime. Server virtualization software packages also offer useful tools for performance analysis, optimization and monitoring. Processing power and memory can be redistributed among the virtual servers without downtime. The tools also provide a way of moving a virtual environment to another physical machine, so that the environments can be distributed evenly across several (even heterogeneous) servers. This migration can be performed with an outage time of less than one second.
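
As a sketch of how such a rebalancing could be scripted, the following Python fragment drives the classic Xen command-line tools; it assumes the Xen 3.x "xm" toolstack is installed, and the domain names ("webserver-ve", "database-ve") and the destination host ("node2") are placeholders for a concrete setup:

import subprocess

def set_memory(domain, mb):
    # "xm mem-set" adjusts the memory allocation of a running domain
    subprocess.check_call(["xm", "mem-set", domain, str(mb)])

def live_migrate(domain, destination_host):
    # "xm migrate --live" moves a running domain to another physical host;
    # only the final copy of dirty memory pages causes a short outage
    subprocess.check_call(["xm", "migrate", "--live", domain, destination_host])

set_memory("webserver-ve", 512)        # shrink one virtual environment ...
set_memory("database-ve", 1536)        # ... to give another one more memory
live_migrate("webserver-ve", "node2")  # then move it to the standby host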

Virtualization is also an excellent solution for programmers and system engineers when it comes to testing software. An application can be tested on hundreds of different operating systems on a single machine. Whole networks of computers can be created and deployed on a single host and made to interact, so that a distributed application can be tested. An application’s behaviour in the case of hardware failure can be examined easily, without the need to provoke real hardware failures.

With virtualization it is also possible to create snapshots of running virtual environments. This is ideal for installing updates on a running and critical operating system: in case of failure it is easy to analyze what went wrong, and in case of success the changes can be applied to the production OS. Hardware that does not physically exist can be emulated and its effect tested on the running operating system. Conversely, some operating systems may not be compatible with newer hardware; with virtualization the new features can be hidden from the guest OS and a hardware environment emulated that is compatible with it.

Most virtualization software packages also provide a way of mass-updating their virtual environments, which reduces the time needed for updates, system maintenance and optimization.

A look at virtualization from the security point of view shows that isolated virtual environments (or sandboxes) are well suited to running untrusted applications; server virtualization is therefore an important concept in building secure computing platforms. To understand the concept of virtualization we first introduce the topic with a brief history of the field.

1.2. Brief history of virtual environments & virtualization

The term “virtualization” is not clearly defined, which makes it hard to pinpoint an exact date for the first occurrence of virtualization in the sense of “server virtualization”. In the 1960s the concept of “time sharing” was on the lips of many programmers and system engineers, especially with the introduction of the “Atlas computer”[1]. The Atlas computer allowed the use of so-called “extracode”, instructions that were added at runtime in software. The Atlas supervisor (which managed the system resources) ran in a virtual machine and used another virtual machine to run user programs. In addition, the Atlas computer introduced the concept of context switches.

The time sharing concept reached its peak with Multics (Multiplexed Information and Computing Service) [MULT65], a time-sharing OS from MIT. The supervisor of Multics can be compared to a very limited hypervisor, since it also manages I/O, CPU time and memory. The initial impetus for the time-sharing concept was, on the one hand, that many programmers could share a single server (today this seems natural whenever we log in to a multi-user Linux system) and, on the other hand, that many employees (or - as was planned - a whole city) could use one server (at that time the price of a typewriter terminal was far lower than the price of a whole computer system).

Later IBM developed several virtual machine systems (mainframes), e.g. the CP-40, the CP-67 and the famous System/370 with the VM/370 OS [VM370]. The main parts of VM/370 were the Control Program (CP), which is comparable to a virtual machine monitor (VMM), and the Conversational Monitor System (CMS), which corresponds to a guest OS running in each virtual machine. The CP emulates the processor, the memory, the peripherals and the console.

VM/370 mostly relied on memory virtualization techniques and intelligent software to emulate guest operating systems. In the mid-1980s the CPU and I/O were virtualized as well, when IBM introduced the LPAR (logical partition) servers. An LPAR environment is abstracted from all physical devices. With this technique it is possible to use multiple CPUs (sysplex: systems complex) and even multiple servers, each consisting of multiple CPUs (parallel sysplex), to form a single mainframe. This mainframe is then broken down into logical partitions, each running a different operating system.

By the mid-1990s comparable systems were developed by other vendors as well, with Sun producing mainframe-class servers like the E10000 (Starfire) series, which uses techniques similar to LPAR but adds dynamic partitioning (known as “Dynamic System Domains”)[2].

At the beginning of the 1990s the prices for servers and hardware in general dropped dramatically, and mainframes were replaced by standard x86 servers which performed the same tasks with a much better TCO. The dropping prices also affected home users, since now everyone was able to afford a personal computer, and the need for approaches like time sharing declined.

The focus then shifted to access and performance limitation on the software side. In this period multi-user operating systems evolved and produced techniques like FreeBSD[3] jails as well as the Linux VServer and UML projects.

In 1999 VMWare introduced VMWare Workstation, which became popular almost instantly since it was the first low-budget professional virtualization software for Windows and Linux. Programmers and system builders were now able to run any given operating system within their environment. Server products from VMWare followed in 2001: the GSX and ESX Server. On Linux, the VMWare products were a great improvement over the existing Wine compatibility layer, since Wine always had to re-implement a closed-source API: although applications were executed directly, much of the software run under Wine simply did not work.

On UNIX, several new virtualization technologies emerged shortly after VMWare presented its products, aiming to improve performance: XEN, SWSoft’s Virtuozzo and Ensim’s VPS. XEN relies on a performance-oriented virtualization technique called paravirtualization, while Virtuozzo and Ensim VPS use operating system-level virtualization.

Microsoft started its own virtualization line by acquiring virtualization software from Connectix[4] in 2003, which made it possible to run Windows within a Macintosh environment. In 2005 Microsoft announced its Virtual PC for Windows, and in 2006 (due to pressure from free solutions like VMWare and XEN) Virtual PC became free software. Having introduced the various approaches to virtualization, we will now introduce the technologies used and the terminology needed in this field of work.

1.3. Server virtualization techniques & terminology

Virtualization can be achieved in many ways, depending on the level of abstraction that needs to be reached. Several techniques can be used for this, each with its own characteristics. Generally, one can distinguish between hardware and software virtualization.

Hardware virtualization is fairly simple: the existing hardware can be partitioned (e.g. hard disk, memory), a technique that has been used in IBM’s LPAR. In addition, hardware can support software virtualization techniques. As we will see later, x86 hardware is not designed to be virtualized, and this has to be compensated for by processor virtualization extensions (Intel’s Vanderpool and AMD’s Pacifica) that introduce the elements needed to improve software virtualization (e.g. an associative TLB (translation lookaside buffer)) and to eliminate the performance bottlenecks of context switches.

Software virtualization can be categorized into four main groups: emulation, native virtualization, paravirtualization and operating system-level virtualization.

Emulation is the oldest approach to virtualization and is sometimes referred to as full system emulation or binary emulation. The virtual machine simply emulates the complete hardware, allowing an unmodified operating system to be run in this environment. The guest may even expect a little-endian processor although the host uses a big-endian processor, since all instructions are executed by the virtual machine and no direct calls to the hardware are performed. As a result, emulation is very expensive in terms of performance and reduces it to a fraction of what the host OS could achieve, even if a single guest OS is given all the resources the host offers. Examples of emulators are Bochs[5] and Qemu[6].
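
The independence from the host architecture can be illustrated with a few lines of Python: the emulator interprets every guest memory access itself, so the guest's byte order never touches the host's. The three-byte "instruction format" used here is invented purely for illustration:

import struct, sys

# Hypothetical little-endian guest program: LOAD 0x1234, STORE 0x5678
guest_memory = bytes([0x01, 0x34, 0x12,
                      0x02, 0x78, 0x56])

def run(memory):
    pc, acc = 0, 0
    while pc < len(memory):
        opcode = memory[pc]
        # "<H" forces little-endian decoding regardless of sys.byteorder
        operand = struct.unpack_from("<H", memory, pc + 1)[0]
        if opcode == 0x01:                     # LOAD immediate
            acc = operand
        elif opcode == 0x02:                   # STORE to (pretend) address
            print("store", hex(acc), "to", hex(operand))
        pc += 3

print("host byte order:", sys.byteorder)
run(guest_memory)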

Native (full) virtualization is a technique which presents a new, virtual PC through an abstraction layer (the so-called virtual machine monitor, VMM). Guest OS code is translated on the fly (binary translation), which is inherently slow, but with the use of special drivers in the guest OS and the limitation to the same type of CPU as the host, some direct hardware access can be achieved, which results in better performance. VMWare Workstation and ESX Server are the most popular products using this technique. The performance is better than with pure emulation, but it still cannot match the stand-alone performance of the host OS, since the guest OS cannot take advantage of the host’s hardware-specific optimizations.

Paravirtualization is one of the latest technologies in the field of virtualization. It offers great performance advantages over the existing virtualization techniques but, as a drawback, requires modifications of the guest OS. The guest OS needs to run specialized drivers in order to access the API which the virtualization environment provides. The management is done by a hypervisor / virtual machine monitor (VMM). The hypervisor controls all access to memory, CPU and I/O. The advantage is that the guest OS is aware of the hypervisor and can therefore interact with it much better. The negative side effects of context switches and cache flushes can be minimized, and the hypervisor can communicate directly with the guest OS. This makes scheduling much easier and results in far better performance. The upcoming CPU support for virtualization will greatly improve paravirtualization, as it will minimize or even supersede the need for guest OS modifications.
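
The difference between trapping on privileged operations and cooperating with the hypervisor can be modelled in a few lines of Python; this is a deliberately simplified abstraction and not XEN's real hypercall interface:

class Hypervisor:
    def __init__(self):
        self.page_tables = {}                  # guest id -> shadow page table

    def hypercall_update_pte(self, guest_id, virtual_page, machine_frame):
        # The hypervisor validates and applies the mapping on behalf of the
        # guest, so no trap and no full context switch is required.
        self.page_tables.setdefault(guest_id, {})[virtual_page] = machine_frame

class ParavirtualizedGuest:
    def __init__(self, guest_id, hypervisor):
        self.guest_id, self.hv = guest_id, hypervisor

    def map_page(self, virtual_page, machine_frame):
        # A XEN-aware kernel issues an explicit hypercall here instead of
        # writing to its page table directly (which would otherwise trap).
        self.hv.hypercall_update_pte(self.guest_id, virtual_page, machine_frame)

hv = Hypervisor()
guest = ParavirtualizedGuest("ve1", hv)
guest.map_page(0x1000, 0x9F000)
print(hv.page_tables)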

Operating system-level virtualization means that the virtual environments are created within the host OS. There is no real guest OS, because it is identical to the host OS: they share the same functions, libraries and applications. A specialized kernel is used to implement the isolated servers; its task is to prevent any of the isolated servers from reading or writing memory, peripherals or data that it is not supposed to access. The most prominent examples of this are the Linux VServer project and Virtuozzo/OpenVZ by SWSoft. An even simpler solution is to use FreeBSD jails, with which a standard user can be “jailed” into a certain directory tree. Although such jails offer very good performance, they are limited in the types of applications for which they can be used. In most cases they do not offer the flexibility of an independent server system.
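
The basic confinement idea can be demonstrated with Python's standard library; the sketch below only illustrates directory confinement via chroot (real solutions such as Linux VServer or Virtuozzo isolate far more, e.g. processes, network and IPC), must be run as root on a Unix host, and "/var/jail" is a placeholder for a prepared directory tree:

import os

JAIL_ROOT = "/var/jail"          # hypothetical, pre-populated directory tree

pid = os.fork()
if pid == 0:                     # child process: enter the jail
    os.chroot(JAIL_ROOT)         # '/' now refers to /var/jail
    os.chdir("/")
    # From here on the child cannot address files outside the jail.
    print("jailed view of /:", os.listdir("/"))
    os._exit(0)
else:
    os.waitpid(pid, 0)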

In addition to these virtualization techniques which refer to setting up real server virtualization there are other techniques for specialized tasks:

Virtual machines in the application layer are application-specific and can only be used in the context of that application. The Java virtual machine (JVM) and Microsoft’s ‘.NET’[7] runtime are the most common virtual machines. They execute byte code which has been generated by their compilers. The virtual machine has been ported to many operating systems, enabling programmers to write their code once and run it anywhere.
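
A toy stack-based byte code interpreter in Python illustrates the principle; the instruction set below is invented and far simpler than real JVM or .NET byte code:

PUSH, ADD, PRINT = 0x01, 0x02, 0x03

def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH:                 # push the following literal
            stack.append(bytecode[pc + 1]); pc += 2
        elif op == ADD:                # add the two topmost values
            stack.append(stack.pop() + stack.pop()); pc += 1
        elif op == PRINT:              # print and discard the top value
            print(stack.pop()); pc += 1

# "compiled" once, runnable wherever the interpreter runs: prints 2 + 40
run([PUSH, 2, PUSH, 40, ADD, PRINT])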

Desktop virtualization can be used to access several virtualized desktops on a single server. It is also referred to as a thin-client solution, in which a “dumb” client merely acts as a terminal for input/output and all instructions are processed on the server. The level of virtualization varies from simple remote access to a single PC (PCAnywhere, Remote Desktop), in which only the location of the user is virtualized, to remote access to physical high-density rack servers (IBM blade centers) offering terminal services. The most common solutions in this field are Microsoft’s Terminal Services and Citrix Metaframe[8].

After this introduction to the history and basic fundamentals of virtualization, chapter two analyzes the fundamentals of server virtualization, such as the hypervisor and the contribution of hardware to virtualization, in greater detail. Chapter three then examines the available virtualization software, particularly with regard to its use in our defined scenario. An in-depth explanation and examination of the selected software follows in chapter 4, together with the definition of high availability itself and the introduction of our case study. In chapter 5 we present the results of the test case and lead over to the summary and outlook in chapter 6.

2. Server virtualization fundamentals

This chapter gives an insight into the fundamental concepts of server virtualization, especially para- and native virtualization. It also shows how hardware can support virtualization and why the current x86 architecture is not designed to be virtualized.

2.1. Hypervisor / Virtual machine monitor

The hypervisor (virtual machine monitor) introduces a virtualization layer which handles several tasks that are normally fulfilled by the hardware itself. Since the virtual environments share the hardware, the virtualization layer needs to control access to it, both to distribute it evenly and to enforce security. To solve these tasks the hypervisor uses a technique called shadowing: it creates software-emulated page tables and shadows access to I/O, DMA and the network interfaces. The guest operating systems access these shadow structures through the hypervisor API, which is much faster than emulating all instructions and waiting for a trap or page fault.

Another main task of the hypervisor is the context switch. Every time another guest OS needs to process instructions, the current OS state has to be preserved (memory, I/O operations etc.) before the new guest OS image can be loaded (from its previously preserved state). This operation is very costly because of the many cache misses that follow once the buffers have been invalidated by the switch (since the newly loaded OS must not access data of the previous OS). As we will see later, an improvement to this is the associative (tagged) TLB, which is not (yet) implemented in hardware and has to be emulated by the hypervisor. With such a TLB the entries can be associated with the virtual environment they belong to, so that not all entries need to be flushed on a switch.
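
The effect of such an associative (tagged) TLB can be sketched in a few lines of Python; this is a strongly simplified model and not the actual hardware behaviour:

class TaggedTLB:
    # Each entry carries the identifier of the address space (virtual
    # environment) it belongs to, so a switch needs no complete flush.
    def __init__(self):
        self.entries = {}                        # (asid, virtual page) -> frame

    def insert(self, asid, vpage, frame):
        self.entries[(asid, vpage)] = frame

    def lookup(self, asid, vpage):
        return self.entries.get((asid, vpage))   # foreign entries never match

tlb = TaggedTLB()
tlb.insert("guestA", 0x10, 0x1F)
tlb.insert("guestB", 0x10, 0x2F)
# Switching from guestA to guestB requires no flush:
print(hex(tlb.lookup("guestB", 0x10)))   # 0x2f
print(hex(tlb.lookup("guestA", 0x10)))   # 0x1f, still cached after switching back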

To use a hypervisor (or paravirtualization in general), the guest OS needs an interface layer that issues the API requests to the hypervisor. This task is normally performed by specialized drivers and kernel modifications (mostly XEN-aware drivers). For open source software this is generally not a big problem, but for a closed-source OS (like Microsoft Windows) it is simply not possible without hardware support. There is a lot of ongoing work to make Windows run in a paravirtualized environment (by the use of drivers), but the performance is not as good as expected. Even without hardware support, however, paravirtualization is much faster than full virtualization, because the hypervisor can interact with the guest OS and intelligently decide how to handle I/O, DMA and memory requests, so that costly context switches can be reduced. The guest OS always sees a consistent hardware state. The more minimal the changes (especially to memory) are, the higher the efficiency. This is why performance improvements are possible even without hardware support for paravirtualization.

In addition, the hypervisor keeps track of changes to the hardware (e.g. external I/O requests) and informs the guest OS.

With hardware support, paravirtualization can overcome many of these limitations and even run an unmodified version of a standard OS (without the need for XEN-aware drivers). To what extent hardware can facilitate or enable virtualization is discussed in the following section.

2.2. Hardware support for virtualization

The first virtual machine monitors used a very simple mechanism referred to as “trap-and-emulate”. The hardware did not support this concept at all, so everything was emulated, and as soon as a guest OS image tried to access the hardware (a trap was caught), a full context switch had to be performed. The new trend is that the VMMs are assisted by the hardware, especially by the CPU.

Yet the first round of hardware support did not bring any performance boost, since it demanded a rigid programming model that did not fit the existing software-emulated virtualization concepts. In fact, intelligent software virtualization was faster than simple but hardware-accelerated virtualization.

To understand which functions the hardware can take over from the software virtualization layer, it is important to know the basic criteria for virtualization. In 1974 Popek and Goldberg formulated the classical requirements for a virtual machine monitor [PG74]. The three key aspects are fidelity, performance and safety. Fidelity means that there should not be any observable difference between running the software on real hardware and running it in the virtualized environment. The virtual machine monitor needs to manage timing effects and hide anything that might let the guest OS detect the virtualized environment. The x86 architecture is not classically virtualizable in this respect, because the guest OS is able to observe that it runs deprivileged by reading the current privilege level (CPL) of the CPU. This is the reason why an interpreter is needed for binary-only operating systems like Windows. The second aspect is performance: the majority of guest OS instructions should be executed without the intervention of the virtual machine monitor. This is only possible if the hardware supports it, because x86 hardware does not implement a concept of dividing memory among several virtual servers; the VMM therefore has to control access to the hardware. The last aspect is safety: the VMM must manage all hardware resources and the access to them, so that a virtual environment can never access data outside its bounds.

According to Popek & Goldberg, the best way to perform classical virtualization is trap-and-emulate [PG74].

The most important concepts in classical virtualization are de-privileging, shadow structures and tracing. De-privileging means that the guest OS is executed directly, but in a de-privileged context. This ensures the safety aspect: the VMM intercepts traps as soon as the de-privileged guest OS needs direct access to the hardware. The VMM emulates these trapping instructions against the VMM state and thus applies a consistent set of changes.

The VMM itself creates shadow structures from the guest operating system’s primary structures (e.g. memory structures, processor status registers etc.) and lets the guest OS access these structures as if it were working on real hardware. The VMM then forwards the instructions to the real hardware (possibly after adapting the requests to make them compatible with the host hardware).

Tracing describes how access to the primary structures is handled. VMMs use hardware page protection to trap accesses to in-memory primary structures. A page fault can then have two causes: a true page fault, which is a real violation of the guest’s protection policy, and a hidden page fault, which results from a miss in the shadow page table. In the first case the VMM simply forwards the fault to the guest OS; in the second case the VMM constructs the missing shadow page table entry and the fault remains invisible to the guest OS (hence ‘hidden’).
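
The decision the VMM has to make on every page fault can be summarized in a short Python sketch; it is a strongly simplified model of shadow paging, not an implementation of any particular VMM:

class ShadowMMU:
    def __init__(self, guest_page_table):
        self.guest_pt = guest_page_table   # primary structure of the guest
        self.shadow_pt = {}                # structure actually used by the hardware

    def handle_page_fault(self, vpage):
        if vpage in self.guest_pt:
            # hidden fault: the guest's own mapping exists, only the shadow
            # copy is missing, so the VMM fills it in silently
            self.shadow_pt[vpage] = self.guest_pt[vpage]
            return "hidden fault: shadow entry constructed, invisible to the guest"
        # true fault: the guest never mapped this page at all
        return "true fault: forwarded to the guest OS"

mmu = ShadowMMU({0x10: 0xA0})
print(mmu.handle_page_fault(0x10))   # hidden fault
print(mmu.handle_page_fault(0x99))   # true fault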

Classical virtualization can be improved either by adaptations to the guest OS (which relaxes the fidelity requirement of Popek & Goldberg and gives the VMM more information about, and better communication with, the VEs) or by hardware support, especially a dedicated hardware execution mode for running a guest OS.

3. Server virtualization software and techniques

In order to choose the right virtualization software for our purpose of unmanaged high availability, we first introduce the currently available virtualization software and later discuss which one best serves our purpose and why.

3.1. VMWare GSX/ESX Server

VMWare is the leading company in virtualization technology. It offers robust full virtualization products ranging from small-scale projects and home use (VMWare Workstation / VMWare Player) to server virtualization (VMWare GSX/ESX Server), as well as productivity tools for administering virtualized server farms (VMWare Infrastructure).

[Illustration not visible in this excerpt]

Because VMWare uses full virtualization, any operating system can be run on it. The virtualization overhead is much greater than with OS-level virtualization (approx. 20%). VMWare emulates the basic components of a computer: CPU, memory, hard disk (which is simply a file on the host system), BIOS, USB controller etc., and presents the guest OS with a narrowed-down, well-defined system environment. Because it only offers generic support for these devices, it cannot exploit the newest features of the underlying hardware. VMWare also offers a range of specialized and generalized drivers (e.g. the VMWare standard VGA driver) which enable the use of SCSI and USB devices as well as tape streamers; most other virtualization technologies do not provide this. VMWare was also the first to run multi-threaded, which enables the use of multi-core systems.

VMWare Workstation is a product for developers, testers and home users. Within five minutes a whole server system can be set up. As soon as this virtual server is started, the guest OS can be installed. When the virtual machine is no longer needed, its current state can be “snapshot”-ed to disk or even distributed to other programmers as an ideal test environment (of course keeping in mind not to breach the licensing contract with the guest OS vendor). VMWare offers up to 9 internal switches and can emulate a complex network among multiple running virtual environments, and can therefore be used for network testing. It is possible to use port forwarding as well as NAT, so that the virtual environments can access the Internet through the host OS network adapter.

The hard disk access is handled in three different ways:

- non-persistent / volatile: comparable to a temporary disk; all changes are discarded as soon as the VMWare session is closed

[...]


[1] E. L. Glaser (MIT, Cambridge, Massachusetts), J. F. Couleur and G. A. Oliver (General Electric Computer Division): System Design of a Computer for Time Sharing Applications

[2] See: http://www.sun.com/servers/highend/e10000/

[3] FreeBSD: free Unix-like Open Source BSD-based operating system (http://www.freebsd.org/)

[4] Connectix Corporation finally closed down in 2003

[5] Bochs Open Source IA-32 Emulation Project: http://bochs.sourceforge.net/

[6] QEMU Open Source Processor Emulator: http://fabrice.bellard.free.fr/qemu/

[7] .NET: http://www.microsoft.com/germany/msdn/netframework/

[8] Citrix Metaframe: http://www.citrix.de

