Program scheduling and simulation in an operating system environment


Research Paper, 2011

97 Pages, Grade: A


Excerpt


Table of Contents

Abstract

List of Abbreviations

List of Tables

List of Appendices

Table of Contents

Chapter 1 : Introduction
1.1 Background
1.2 Aim
1.3 Objective
1.4 Statement of Problem
1.5 Purpose of the Dissertation
1.6 Significance of the Dissertation
1.7 Research Question
1.8 Summary
1.9 Dissertation Outline

Chapter 2 : Literature Review and Survey
2.1 Overview
2.2 Summary

Chapter 3 : Research Methodology
3.1 Introduction
3.2 Quantitative research
3.3 Objective of Research
3.4 Research Analysis
3.5 Research Questions
3.6 Research Design
3.6.1 Research Approach
3.6.2 Sampling Plan
3.6.2.1 Simple Random Sampling
3.6.3 Simulation
3.6.3.1 The Simulation Design: Problem Definition
3.6.3.2 Identifying Variables Related to the Decisions, Performance and the Decision Rules
3.6.3.3 Construction of a Simulation Model
3.6.3.4 Relating Random numbers to the variability in the simulation
3.6.3.5 Validation of the model
3.6.3.6 Appropriate experimental design
3.6.3.7 Run the simulation
3.6.3.8 Examination of the data and making conclusions
3.6.3.9 Simulation advantages
3.6.3.10 Limitations of Simulation
3.7 Design Questionnaire
3.8 Data Collection
3.9 Secondary Data Collection

Chapter 4 : Findings and Analysis
4.1 Data Presentation
4.1.1 Illustration
4.1.2 Illustration of the simulation program functions
4.1.3 Simulation Analysis
4.1.3.1 Identifying the problem
4.1.3.2 Identifying variables related to the decisions, performance and the decision rules
4.1.3.3 Constructing a simulation model
4.1.3.4 Validate the model
4.1.3.5 Specify the values of decision variables to be tested
4.1.3.6 Run the simulation
4.1.3.7 Examine the results, make a conclusion and take the best course of action

Chapter 5 : Conclusions
5.1 Summary
5.2 Conclusions
5.3 Discussion
5.4 Recommendations

Bibliography/References

Appendices

Appendix I: Data Dictionary
Appendix II: Questionnaire
Appendix III: Entity Relationship Diagram
Appendix IV: Data Flow Diagram
Appendix V: Structured Chart
Appendix VI: Flow Chart
Appendix VII: Source code
Appendix VIII: Interface
Appendix IX: Gantt Chart
Appendix X: Turnitin Report

Chapter 1 : Introduction

1.1 Background

CPU scheduling is a technique used by computer operating systems to manage the usage of the computer’s central processing unit. In a multi-programming environment, where several processes run on the same processor, it is essential to use scheduling criteria to avoid collisions in the computer’s operations. This helps users in an information-technology-oriented firm to share server space and resources such as printers and file storage. In a multi-tasking environment, a program called the CPU scheduler selects one of the ready processes and allocates the processor to it. There are a number of occasions when a new process can or must be chosen to run: when a running process blocks and changes its state to ‘Blocked’; when the timer for a running process expires; when a waiting process unblocks and changes its state to ‘Ready’; and when a running process terminates and changes its state to ‘Exit’ (Wikipedia, 2013).

Different scheduling programs, referred to as algorithms, can be employed for CPU scheduling. Among the most popular is Shortest Job First (SJF). SJF gives the processor to the process with the shortest next CPU time allocation, known as the burst. If there are processes with equal CPU bursts in the ready queue, the scheduler falls back on the First Come First Served algorithm, which allocates the processor to the first process to arrive in the queue regardless of its burst time. SJF operates under the assumption that the length of the next CPU burst of each process in the ready queue is known (CPU scheduling, 2013). The SJF algorithm can be used in both preemptive and non-preemptive forms.

Shortest Job First with preemption uses a priority measure to determine the next process to be given the CPU. The processes have different CPU bursts and different priority levels allocated to them, and the process with the smallest priority magnitude is always picked next. A process already allocated the processor can have the CPU preempted from it and allocated to another process of higher priority when such a process arrives in the queue. Non-preemptive SJF operates in the normal manner: the job with the smallest CPU burst in the waiting queue is always picked next for allocation of the CPU, and the remaining processes must wait regardless of their urgency.

Based on the introduction above, it is essential to use the right CPU scheduling strategy to achieve better average waiting time and turnaround time. An SJF simulation was therefore proposed using both the preemptive and the non-preemptive method.

1.2 Aim

The aim of this dissertation is to simulate a system in order to gain a better insight into CPU scheduling and its impact on the performance and efficiency of the proposed system. It showcases how CPU scheduling, particularly the Shortest Job First algorithm, can be used to facilitate sharing of resources in a computing environment, fair allocation of the processor to different processes, and consideration of job priority when allocating the CPU in a multi-tasking scenario (Abraham, 2008).

1.3 Objective

The simulation system is intended to show how the following metrics determine scheduling efficiency:

- CPU utilization: the fraction of time the processor is doing useful work rather than running the idle process. The goal is to keep the CPU as busy as possible.
- Throughput: the number of processes that complete execution per unit time.
- Turnaround time: the total time taken to execute a given process.
- Waiting time: the amount of time a process spends waiting in the ready queue.
- Response time: the time from when a request is submitted until the first response is produced, especially relevant in time-sharing systems.
- Multitasking: the ability to run multiple processes at the same time.

The scheduling strategies will aim to maximize throughput and CPU utilization and to minimize the average turnaround time, waiting time and response time, while also ensuring fairness in CPU scheduling (Maria, 2012).

1.4 Statement of Problem

Given the rising uptake of technology, specifically the utilization of computers, the speed and efficiency of the central processing unit are major factors in the adoption of any technology. Several techniques are used to share the processor, among them scheduling algorithms such as shortest job first, first come first served, round robin and many others. One of the most popular is shortest job first, which can run with or without preemption. It is therefore necessary to evaluate the performance of the central processing unit under the Shortest Job First scheduling algorithm, both with and without preemption, hence the need for the simulation program proposed in the project proposal. In a multi-programming environment, where several processes run on the same processor or on a multiprocessor, it is essential to use scheduling criteria to avoid collisions in the computer’s operations; the whole process is handled by the operating system. With the rise of social networking, it is now even more crucial to have proper scheduling in place to manage processes and avoid unwanted traffic and collisions in networks and operating systems. Developers tend to ignore the fundamentals of scheduling when developing a system. Once the system is in place, operations slow down and a re-engineering process takes place to track down which part of the development has the problem. The whole process suffers a significant impact if there is no proper knowledge of how CPU scheduling works.

1.5 Purpose of the Dissertation

The purpose of this study is to explore the performance of the central processing unit of a computer under different processor scheduling algorithms. The performance will be observed for the Shortest Job First algorithm with and without preemption. The parameter used will be the average waiting time for a given number of processes under each scheduling criterion. This study will give a better insight into how the CPU works in a simulation environment. With the algorithms written in the C programming language, it will help developers, whether in networking or systems, and even on the latest mobile operating systems such as iOS and Android, to properly manage and make use of the CPU.

1.6 Significance of the Dissertation

It is very important to understand how the central processing unit of a computer works. By having an insight into the operation of a processor, we will be able to choose and implement the best systems in our technologies (networking or systems). It is crucial to understand the impact of CPU scheduling on the performance of the machine. This dissertation will give an indication of the waiting time for processes in a multitasking computing environment using the shortest job first scheduling algorithm, and hence help us choose between the preemptive and non-preemptive algorithms after running the simulation program.

1.7 Research Question

The study will answer the question of the impact of CPU scheduling criteria on the performance of a computing system. It will also give an insight into how the process works and how a simulation program can be used to determine which of the shortest job first schemes, with or without preemption, is best for optimized CPU utilization, reduced waiting time for processes, high throughput, fast response time and short turnaround time for the processes being executed by the processor. With cloud computing and virtualization becoming more common, the choice of scheduling method becomes increasingly important.

1.8 Summary

The paper will critically examine the concepts of CPU scheduling using the Shortest Job First algorithm with and without preemption. By the end of the simulation, the user will be able to determine which scheduling scheme to use for the best performance of the proposed system. The simulation shows which method works well enough to be adapted to any system, network, or even cloud computing and virtualization. The future of technology is going to be complex, but with a proper foundation in scheduling methods, developers of any kind will not have issues with their end products.

1.9 Dissertation Outline

The paper has been designed to follow the following structure of chapters:

Chapter 1: Introduction

This chapter consists of the introduction to the dissertation assisted by background, aim and objectives of the research.

Chapter 2: Literature Review and Survey

In this chapter, the focus is on a discussion of previous research studies related to CPU scheduling methods and their effects on processes, with particular attention to the two SJF schemes (preemptive and non-preemptive). It will also introduce operating systems and CPU scheduling, including a brief history of the operating system and CPU scheduling concepts.

Chapter 3: Research Methodology

This chapter is an elaboration of the research methodology adopted for the research study and the aspects of the research that the research methodology will cover.

Chapter 4: Findings and Analysis

This chapter is a presentation of the findings of the adopted research methodology and literature review.

Chapter 5: Conclusion

The conclusions and recommendations are drawn from the critical literature review and presented in light of the theoretical literature. They outline the outcome of the critical analysis carried out in the current research.

Chapter 2 : Literature Review and Survey

2.1 Overview

This dissertation will explore the activities of a computer operating system in processor allocation for any given multiprogramming computing environment. The general topic to be covered is CPU scheduling, using a simulation program based on the Shortest Job First (SJF) algorithm with and without preemption as the scheduling schemes to be examined. This review is based on that topic. In the recent past, the operating systems field has been in constant evolution, moving from command-based operating systems to the user-friendly ones currently in use. One factor evident in all operating system versions is the processor allocation methodology, whereby earlier versions used serial methods like first come first served with no consideration of the size and priority of the processes in execution.

However, the specific scheduling criteria applied by the operating system have not been a well-elaborated concept in the past. Users of different systems are currently unable to tell which scheduling criteria their systems use to share the processor among different processes. Not enough research and illustration has been done on CPU scheduling to establish the actual impact of the choice of scheduling criteria on the performance of different systems. There has been a gap between the processor speed and the actual execution speed for jobs fed in by the end user. This review will focus on answering several questions: 1) what we know about the area of study, i.e. how the CPU is scheduled, and the relationship among key concepts in processor scheduling such as CPU utilization, waiting time, turnaround time, throughput, and response time; 2) what the existing studies and theories about CPU scheduling are; 3) what the shortcomings and inconsistencies in past research are; 4) what may need further testing or evidence; 5) which methods or designs might be faulty; and 6) what the reasons for further study of the question are, and the contribution this review is likely to make to future study in this area.

Research by Maria (2012) shows that in any given computer system, different processor allocation methods are applied. Central processor scheduling is one of the basic functions of any operating system. In a computer network comprising several workstations and file servers, where users share resources like printers, CPU scheduling is very necessary. Multitasking has been embraced in current operating systems to enhance the efficiency of computer systems, and this relies heavily on the algorithms used for processor scheduling. Different scheduling algorithms are normally used, and among them is Shortest Job First. In the Shortest Job First scheme with no preemption, the waiting process with the shortest execution time is always allocated the CPU first. This means that whenever the processor is free for allocation, it is given to the job with the smallest next CPU execution time. Batch jobs whose execution times are predetermined are best handled under this scheme. The Shortest Job First algorithm is optimal in that it provides the least average waiting time for any batch of jobs; its major limitation is that it favours short jobs at the expense of long ones.

The key concepts in CPU scheduling are as follows. CPU utilization is the portion of time the processor is in use; the goal is to keep the CPU as busy as possible. Throughput refers to the number of jobs that finish execution per unit time. Turnaround time is the time taken to finish execution of any given job. Waiting time refers to the amount of time a process spends in the waiting queue. Response time is the time from when a request is submitted until the initial response is produced, especially in time-sharing systems. These factors are related: a computer system with high throughput also tends to have high processor utilization, and when the turnaround time for processes is short, response time improves and waiting time falls, since jobs do not spend long in the waiting queue.

The existing studies on processor scheduling have identified the following CPU scheduling algorithms used by operating systems: priority scheduling, shortest job first, round robin, shortest remaining time, first come first served, and multilevel feedback queue. The Shortest Job First algorithm, the most optimal in terms of efficiency, has two schemes: preemptive, whereby the processor can be taken from one process once a more deserving process enters the waiting queue; and non-preemptive, in which a process that has been allocated the processor cannot lose it until it finishes execution.

The major shortcoming with the Shortest Job First algorithm, which is the subject of this discussion, is that it is treated as non-preemptive in many studies. Many people are not aware that the algorithm can be used under the preemptive scheme. This implies that shorter and urgent jobs have to wait unnecessarily long for huge jobs which arrived first to finish execution. This creates an inconsistency, given that the dissertation will focus on the Shortest Job First algorithm with and without preemption as the two schemes to be used in the proposed simulation program. There is also little coverage, in all the previous studies of operating systems, of the actual impact of processor scheduling techniques on computer systems’ performance. A study by Abraham (2008) indicates that CPU scheduling using the Shortest Job First algorithm can facilitate sharing of resources in a computing environment, fair allocation of the processor to different processes, and consideration of job priority when allocating the CPU in a multi-tasking scenario; an assertion that is not conclusive, since adequate testing still needs to be carried out.

It has been indicated in the project proposal that the major project tasks will be requirement gathering and analysis, system design, development of the simulation program, testing, and implementation. The appropriate method of testing needs to be specified; this can be unit testing, system testing or pilot testing. More effective methods of requirements gathering also need to be incorporated in the research; techniques like requirements elicitation should be used. Previous texts on operating systems drew no specific implications of waiting time and turnaround time for computer systems’ performance. There is also a need for more evidence and testing of the actual effect of waiting time on performance, since insufficient consideration is given to the priority of the different processes in the waiting queue.

In a study by Milenkovic, M. (1987) on operating systems concepts and design, some of the highlighted CPU scheduling strategies, such as priority scheduling, are faulty. Priority scheduling as a technique involves urgency measures allocated to the processes, but little is known about the mechanisms used to determine the priority of individual processes, making the algorithm inefficient. This is unlike the Shortest Job First algorithm, which uses burst length as the measure of priority for each process in the ready queue, thereby making it possible to apply preemption.

There are several reasons why I decided to do further research on CPU scheduling strategies, specifically the Shortest Job First algorithm under both the preemptive and non-preemptive schemes. One reason is that the information given in existing studies does not establish the actual impact of the choice of processor scheduling algorithm. None of the texts I reviewed gives the exact role played by CPU scheduling; for instance, the lecture notes only give a theoretical explanation of the topic without any suggestion of practical applications, and Maria (2012) gives more focus to the Round Robin strategy, which is less practical in real computational applications. Given the above findings, my research will be anchored on proving that the Shortest Job First algorithm is the optimal and most practical CPU scheduling strategy. I will also give a practical application of the mechanism, based on the preemptive and non-preemptive schemes, using a simulation program written in the C programming language. Another reason is technological development. Today we enjoy technology developed by software developers around the globe, ranging from Microsoft to small IT entrepreneurs. Their aim is to provide technology to users like us and to simplify our daily lives, and scheduling is part of their development process: we adapt the method and implement it in an operating system, a network or a piece of software. In social networking services such as WhatsApp, Facebook, etc., millions of users are connected. Would it be possible for those systems to run on a batch-processing CPU, or would it be better to have a proper scheduling method in place so that once the system is connected to millions of users around the world, there is less network traffic congestion and there are fewer system issues?

A publication by Dhotre, I. (2008) explores the operating systems concept of CPU scheduling in its fourth chapter. The book explains the use of a dispatcher to schedule jobs on the processor, but little is said about the operational efficiency of the dispatcher. Dhotre describes several scheduling algorithms, including priority scheduling, multilevel feedback queue scheduling, shortest job first, first come first served, round-robin, multilevel queue scheduling, and shortest remaining time scheduling. The text also compares the first come first served and round robin algorithms, but without distinct proposals on how to improve processor-sharing efficiency in a multiprocessing environment. My research will be aimed at demystifying the pre-emption and non-pre-emption concepts which are not well covered in this text, so as to help improve processor-sharing efficiency and the performance of computerized systems. The text gives no proposal for practical applications of the scheduling strategies studied. It indicates that scheduling is aimed at improving central processing unit utilization, but offers no evidence of how different scheduling policies lead to different levels of CPU utilization. I will, however, use the theoretical descriptions in this book as a basis for my research, in order to study further the implications of scheduling for the performance of computer systems and a better user experience with operating systems.

Another text by Ramesh, S. (2010) explores the principles of operating systems. It gives an overview of the computer system, including computer architecture, input/output communication techniques, the memory hierarchy, and the execution of instructions. The text also covers operating systems in terms of their evolution over time, real-time operating system concepts, memory management, virtual machines, system design and implementation, types of operating systems, the functions performed or services offered by the operating system, system components and system calls, and the need for, as well as the role of, operating systems concepts in the performance of computerized systems. Process management is also described. The process states are given as created, ready/waiting, running, blocked, and terminated. Operations on processes are covered in terms of multiprocessing and the concept of cooperating processes. The text also describes the process control block, the structure used to moderate the flow of jobs in and out of the processor. The process control block contains information about the process such as run state and scheduling information, memory management information, hardware run-state information, process signalling information, access control information, and input/output information. Context switching is described as follows: in a multiprogramming environment, in order to ensure fair sharing of the processor among jobs, interrupts are periodically generated by the hardware clock. This enables the operating system to allocate the processor to all the processes in main memory by scheduling their requests, allowing the jobs to execute on the central processing unit at similar intervals of time. Each time a clock interrupt occurs, the interrupt handler checks the amount of time the process under execution has consumed. If the process has exhausted its allocated time slice, the processor scheduling policy schedules a different job. Each change of the processor from one job to another is referred to as a context switch. The application of the context switch concept in real-time systems is not well illustrated, hence the need for further research on how the concept can be applied in processor scheduling to improve the efficiency and performance of computerized systems. The text also explains the concept of inter-process communication, whereby processes are able to learn of each other’s state and size, but this is not numerically illustrated, so there is little or no evidence of the practical application of the concept. This calls for further research on the principles of operating systems.

Next is a review of the text by Agrawala, A., Hwang, P. and Gordo, K. (2013), Mission Critical Operating Systems, Amsterdam: IOS Press. The authors explore the intensive requirements and characteristics of mission-critical systems. Some of these characteristics include the complexity of the system, the size of the system (most such systems are huge), software intensiveness, correctness, information-overload handling, and secure operation. For the above features to be achieved effectively, there is a need to use the best processor scheduling strategy.

I also reviewed the publication Joshi, D. (2005), Operating Systems, New Delhi: Dreamtech Press. This text was intended to give a clear understanding of both practical and theoretical operating systems concepts. The book explains the fundamental concepts of operating systems and how they can be applied in computer systems design, and it also covers developments in the operating systems field. Some of the basic concepts covered include memory management, semaphores, paging algorithms, deadlocks, monitors, inter-process communication, file system design issues, protection and security mechanisms, device drivers, and central processing unit scheduling. The book also discusses several case studies on the UNIX, Linux, and Windows operating systems to illustrate the industrial application and implementation of resource sharing and management strategies, so that end users can understand the text better. It also shows the application of the concepts in real operating systems. However, little is said about the best scheduling strategies and their role in improving processor scheduling efficiency and the subsequent performance of computer systems. This warrants further research on operating systems concepts.

In his fourth edition, Ritchie, C. (2003) examines operating systems with a focus on specific systems, using UNIX and Windows as illustrations. Ritchie gives the basic concepts in practical operating systems, including multiprogramming, spooling, real-time systems, single-stream batch processing, the era of punched cards, early printers and terminals, and new ideas in the field. He also introduces the UNIX and Windows operating systems in terms of their features, and explains the process concept in relation to process creation and states as well as the operation of computer systems in kernel mode. Hardware features which affect computer system performance are given; these include the interrupt system, the Direct Memory Access (DMA) technique, the general machine structure, and memory addressability. Knowledge of these hardware features will be crucial in my research on how to improve processor performance. Most importantly, the book covers process management as an independent section, describing basic concepts such as process creation, process threads, and process state diagrams. In its simplest form, scheduling involves four states. Blocked: a job waiting for some event, such as an I/O operation, to occur before continuing execution; blocked jobs do not require CPU service, since execution cannot proceed until the blocking event completes. Ready: a job not yet given the processor but ready to start execution. Running: the job currently executing on the CPU. Terminated: a process whose execution has stopped but a trail of which is still kept by the operating system; such a process is referred to as a zombie process. The text gives several reasons for the creation of a job: a user starts a program, the operating system creates a process to provide a service, a user logs on to a system, a program starts another job, or a parent process creates child processes which in turn create other processes, forming a tree of processes. It also explains the objectives of scheduling, which include maximizing system throughput, being fair to all users relative to the work being carried out, providing tolerable responsiveness, degrading performance gracefully, and being consistent and predictable. The book categorizes scheduling into three levels: long-term, medium-term and short-term. The high-level scheduler provides the medium-term scheduler with an appropriate number of jobs so that the CPU does not sit idle waiting for jobs that are blocked, and it controls the admission of jobs into the system. The medium-level scheduler swaps processes in and out of memory. The low-level scheduler allocates the processor to a process that is already in primary memory and ready to run. Each process must release the processor after exhausting its time burst, after which it returns to the pool of processes from which the scheduler picks the one due for execution next. The text examines the above concepts under specific operating systems, including Windows and UNIX. However, it does not focus on specific scheduling algorithms; it treats scheduling only as a generalized theoretical concept, hence the need to improve on this research.

Another important text was done by Dhamdhere, D. (2006). The text gives a general overview of the operating systems with the main features being user convenience, performance of the system as well as efficiency. Modern operating systems with aspects of multiprogramming and distributed computing will be a major contribution to my research from this text. This book gives precise distinction between processes and threads of execution. It also illustrates the programmer's view of processes. In the preliminaries of scheduling the author asserts that a scheduling policy used in any given operating system is likely to influence user service, system performance and efficient use of resources. The book defines the scheduling as the activity of selecting the next request to be serviced by a server and hence a scheduling policy determines the quality of service provided to the system users. The early type OS scheduling was non-preemptive where a process retained control of the CPU until a process blocked or terminated but modern OS scheduling is preemptive where a process can be blocked or terminated by the scheduler to allocate the CPU to another process. There are six scheduling algorithms that have been widely used in history and these include Shortest job first algorithm, Shortest remaining time, Round robin, Priority, First come first served algorithm, and Multilevel Feedback Queue algorithm. The book explains the shortest job first algorithm as whereby the operating system picks the job that has the least anticipated time for processing. In case of a tie First Come First Served can be used. SJF favours short jobs over long ones and can lead to starvation of long jobs. To avoid starvation as a limitation some schemes of shortest job first algorithm have been developed. The main scheduling schemes of interest to my research from this text are preemptive and non-preemptive and their unique explanations as given by the author. 
In non-preemptive scheduling, a server always processes a scheduled request to completion; preemption never occurs. The scheme is attractive owing to its simplicity, since no request is ever left partially serviced and no such state needs to be maintained. Three scheduling policies are discussed under this scheme: first come first served scheduling, shortest request next scheduling, and highest response ratio next scheduling. In preemptive scheduling, according to the text, the server can be switched to a new request before finishing execution of the current one, and the preempted request is pushed back onto the waiting queue. A preempted request has to be scheduled afresh, so a single request may be scheduled several times before it leaves the system. This scheme is normally used in time-sharing and multiprogramming operating systems. The text discusses four scheduling policies under the preemptive scheme: round-robin scheduling with time slicing, least completed next scheduling, shortest time to go scheduling, and highest response ratio next scheduling. The book gives precise diagrammatic and graphical illustrations of how the two schemes work. It also describes scheduling in practice, giving theoretical explanations of long, medium and short term scheduling. The author asserts that an operating system has to offer a suitable combination of system-centric and user-centric features, and that the computer manager has to adapt to the number and nature of user requests anticipated in the operating system's environment as well as to the availability of the required resources. A single scheduler with a single scheduling policy cannot address all of these concerns, hence the development of several scheduling algorithms.
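The preemptive counterpart of SJF, shortest remaining time first, can be sketched in the same style. The unit-by-unit simulation below is my own illustrative construction (the text gives diagrammatic rather than code illustrations): at every time unit the scheduler re-examines the ready set and preempts the running job whenever a job with less remaining time is present.

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first) on (pid, arrival, burst).

    Advances the clock one time unit per iteration; the running job is
    preempted whenever a ready job has less remaining time.
    Returns {pid: (waiting, turnaround)}.
    """
    remaining = {pid: burst for pid, _, burst in processes}
    arrived = {pid: arr for pid, arr, _ in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [pid for pid in remaining if arrived[pid] <= clock]
        if not ready:                          # CPU idle until next arrival
            clock += 1
            continue
        pid = min(ready, key=lambda p: (remaining[p], arrived[p]))
        remaining[pid] -= 1                    # run the chosen job one tick
        clock += 1
        if remaining[pid] == 0:
            finish[pid] = clock
            del remaining[pid]
    return {pid: (finish[pid] - arr - burst, finish[pid] - arr)
            for pid, arr, burst in processes}

jobs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
stats = srtf(jobs)
```

For this job set, P1 is preempted as soon as P2 arrives with a shorter remaining time, and the average waiting time works out to 6.5 time units; a single request (here P1) is indeed scheduled several times before it leaves the system, as the text describes.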
Priority-based scheduling is also discussed; it is said to offer good response times to high-priority processes and good throughput. In priority scheduling, each job is assigned a priority value, and the process selected for execution is the one with the highest priority; in case of a tie, first come first served ordering is used. This type of scheduling can operate with or without preemption. In other systems, processes with the lowest priority values are afforded the highest selection priority. However, the text gives no evidence of how the priority values are obtained, since no measurement criterion is provided; this makes priority scheduling of limited relevance to my research, although I will employ related concepts, using the CPU burst length as my measure of urgency for any process requesting the processor. My study will be of importance to future research on improving processor performance, since I will demonstrate a distinct difference in results depending on whether the preemptive or the non-preemptive scheme of the shortest job first algorithm is used. This book, however, does not compare the two schemes under one scheduling algorithm; it only gives generalized illustrations. The graphical representations it provides will inform my research on how central processing unit performance is measured and on its effect on the general user experience of any computer system.

A study by Jaeger (2008) focuses on operating system security. It gives the goals of operating system security as the ability to protect three main security properties of systems and their stored data: availability, which ensures that the system is always available to its users; privacy, which ensures that stored data is available only to authorized users and remains secret from others; and integrity, which ensures that unauthorized modifications are not made to stored data. All of these help to protect the data stored on any given computer system. Techniques used to enforce them include identification and authentication of system users through login user IDs, where users can only access the operating system by means of a user name and password. Operating system security is crucial, and it cannot be achieved without proper scheduling, since collisions in a system can make it unavailable and thereby violate the availability requirement. The book also covers access control techniques and evaluates the security measures in the UNIX and Windows operating systems, with a focus on their protection systems, authorization, security analysis, and security vulnerabilities. The text will not be of much direct use in my research, though it will help me keep the security requirements of the operating system in mind. This study could be improved by covering the implications of scheduling strategies for operating system security and by elaborating on how ineffective security mechanisms can lead to poor performance of a computerized system.

A study by Hand (2009) also makes a crucial contribution to my research. Hand gives an illustrated treatment in the form of a handout for Cambridge University computer science students. My focus was on the CPU scheduling topic, in which various scheduling criteria are used as metrics: processor utilization, waiting time, system throughput, turnaround time, and response time. The author asserts that sensible scheduling strategies should increase processor utilization and system throughput while reducing waiting time, response time and average turnaround time. The concepts of liveness and fairness also need to be considered. The text illustrates the different policies for processor scheduling, including shortest job first, first come first served, and round robin, among others, and for each algorithm it gives a clear graphic illustration using Gantt charts. From the analysis, the author asserts that shortest job first is the best policy, since it provides the least average waiting time for any collection of processes entering the waiting queue. This study makes a major contribution to my research, since I also used Gantt charts to illustrate the effect of each SJF scheduling scheme on the average waiting time and turnaround time for a given set of processes. The study also presents a preemptive version of shortest job first, identified as shortest remaining time first (SRTF). Further research is needed to establish the exact effect of using either the preemptive or the non-preemptive shortest job first algorithm in a scheduling strategy.
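Since Gantt charts recur both in the cited handout and in my own analysis, a small helper like the one below can render a schedule as a text Gantt chart. This is a hypothetical utility of my own, not code from the cited study; it assumes a schedule expressed as `(pid, start, end)` segments and aligns the tick marks only for the small, single-digit times used in such examples.

```python
def gantt(segments):
    """Render (pid, start, end) schedule segments as a text Gantt chart.

    Alignment assumes the short pids and single-digit times typical of
    textbook examples; wider values would need padded cells.
    """
    bar, ticks = "|", str(segments[0][1])
    for pid, _, end in segments:
        cell = f" {pid} "
        bar += cell + "|"                        # one cell per segment
        ticks += str(end).rjust(len(cell) + 1)   # tick under each boundary
    return bar + "\n" + ticks

chart = gantt([("P1", 0, 3), ("P2", 3, 7)])
# | P1 | P2 |
# 0    3    7
```

Rendering the same process set under both SJF schemes side by side makes the difference in waiting times visible at a glance.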

This review will make a significant contribution to future texts about operating systems. I will give practical proof that the same set of processes, with given burst lengths, can have different average waiting times under the preemptive and non-preemptive SJF algorithms, and this will play a role in determining the performance of the proposed system.

In conclusion, much existing research on operating systems has contributed immensely to the latest developments in CPU scheduling. All of the reviewed literature on processor scheduling strategy shows that there is a relationship between the CPU scheduling strategy applied in a computing system and that system's performance. It can also be clearly seen that existing studies of processor scheduling techniques focus only on theoretical material and not on the industrial use of scheduling. There is prevalent inconsistency regarding which scheduling algorithms can be applied as both preemptive and non-preemptive. Finally, it can be concluded that CPU scheduling helps to avoid collisions and ensures efficient central processing unit utilization; it is therefore a crucial concept in the study of operating systems and the wider computer science discipline.

2.2 Summary

The literature review focused on the general knowledge that exists about central processing unit scheduling techniques. The review identified several shortcomings in the existing studies and the gap that exists between the theoretical and practical aspects of CPU scheduling. It also suggested the role the project research can play in improving the field of operating systems so as to enable high performance of computerized systems in a world where the uptake of technology is rapid. Technology grows day by day, and increases in capacity and data volume will affect overall quality; an optimal scheduling method can bring some relief to operations. The survey covered different texts about operating systems with an emphasis on scheduling concepts, including scheduling algorithms and theoretical knowledge of CPU scheduling. I also reviewed internet sources with general information about processor scheduling, cloud computing and virtualization. By the end of my literature survey, I had adequate data on the existing literature about operating systems and a clear reason for studying the field further, with an emphasis on the relationship between processor scheduling and the performance of modern operating systems and computer-based systems.

Chapter 3 : Research Methodology

3.1 Introduction

CPU scheduling is the act of selecting the next process for the central processing unit to execute whenever the current process completes. This selection is carried out by one of a number of scheduling algorithms. A CPU scheduler is the part of the operating system responsible for deciding which thread is to run and when. This decision is made after ensuring that all the conditions for execution have been met; these conditions include the round robin criteria, timer waits, input/output waits, and thread priority.

In the previous chapter on literature review, I identified a number of shortcomings in existing studies and the gap between theoretical and practical aspects of CPU scheduling.

Having established that, this chapter will use quantitative research, since the factors to be investigated can be measured in a precise manner. The research study is grounded in this methodology, which thus plays an important role in the implementation of the study.

A simulation program is developed in order to gain a better understanding of CPU scheduling under the preemptive and non-preemptive SJF methods. For each method, the simulation will show the CPU burst time, turnaround time, waiting time, and response time.

3.2 Quantitative research

Kothari (2004) defines quantitative research as research concerned with measuring the quantity of a phenomenon. This means that the study is concerned with something that can be measured.

Aliaga and Gunderson (2000) explain that quantitative research helps to explain phenomena by collecting numerical data, which are then analysed using statistical methods to reach conclusions. The conclusions drawn from such a study either support the hypothesis or fail to support it.

Quantitative research into CPU scheduling involves collecting data on burst time, turnaround time, waiting time and response time, each measured as the amount of time it takes during scheduling.

3.3 Objective of Research

The objective of the dissertation is to determine which scheduling method, preemptive or non-preemptive SJF, maximizes CPU utilization and minimizes the average turnaround time, waiting time and response time.

3.4 Research Analysis

The research analysis involves analysing the different data sets used in the research. The study will analyse the length and size of the data sets to determine whether they vary and how their variances differ. This helps to establish the differences and similarities between the selected data sets.

3.5 Research Questions

1. What is the impact of the CPU scheduling criteria on the performance of a computing system?

2. Which scheduling scheme, preemptive or non-preemptive SJF, is best for optimized CPU utilization, reduced process waiting time, high throughput, fast response time and short turnaround time for the processes being executed by the processor?

3.6 Research Design

According to Panneerselvam (2004), a research design provides a guideline for data collection, which includes:

1. Choosing a research approach: this research uses simulation as its approach. A model for preemptive and non-preemptive scheduling is designed and a simulation is run.

2. Drawing up a sampling plan: a sample of data sets is drawn and run through programs that help determine the more efficient CPU scheduling procedure, preemptive or non-preemptive. The functions the data is simulated on include the sort function, repeat function, update function, selection function, and print output function. The data sets run through these functions are random samples of jobs that require CPU processing.

3. Designing the experiment: the design of the experiment is laid out below in the set-up of the simulation, following the standard procedure established by scholars in modelling and simulation.

3.6.1 Research Approach

For this study, the approach taken is modelling and simulation, which is used to arrive at the times taken by preemptive versus non-preemptive SJF. Panneerselvam (2004) points out that simulation is either discrete or continuous: in a continuous simulation the simulation clock is advanced continuously, while in a discrete simulation it is advanced in discrete steps. In a discrete simulation, the clock increment can thus be made larger or smaller depending on the accuracy required and the situation.
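The discrete flavour of simulation described above can be sketched as an event list whose clock jumps straight from one event time to the next rather than advancing continuously. The `(time, label)` event representation below is an assumption made for illustration, not the dissertation's actual simulation code.

```python
import heapq

def run_discrete_simulation(events):
    """Minimal discrete-event clock: simulated time jumps directly to the
    next scheduled event instead of advancing continuously."""
    queue = list(events)                  # (time, label) pairs
    heapq.heapify(queue)                  # earliest event on top
    clock, log = 0, []
    while queue:
        time, label = heapq.heappop(queue)
        clock = max(clock, time)          # jump the clock forward
        log.append((clock, label))
    return log

log = run_discrete_simulation([(5, "P2 arrives"), (0, "P1 arrives"), (9, "P1 finishes")])
```

Because the clock only visits event times, the gaps between events cost nothing to simulate, which is why discrete simulation suits scheduling experiments like this one.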

3.6.2 Sampling Plan

Statistically, it would be tiresome, and could sometimes give skewed results, to include the total population in the simulation; sampling saves cost and time when conducting the study. A sampling plan can be either a probability sampling plan or a non-probability plan. Here a random sampling method is applied, using a probability sampling plan (Aravind & Haldar, 2010).

3.6.2.1 Simple Random Sampling

The target population, sampling frame and sample size are defined carefully, as these have a great impact on the conclusions drawn from the data. The target population comprises both the frequently occurring data sizes that require longer processing and those that need only short processing times. The sample size is set at 10 different random data lengths, and the sampled jobs are then sent to the processor at random, forming a random sample of 10 jobs that require CPU processing.
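Such a sample could be drawn with Python's random module, as in the sketch below. The burst-length ranges and the even split between short and long jobs are illustrative assumptions of mine, not values taken from the study.

```python
import random

def draw_burst_sample(size=10, short=(1, 5), long_=(20, 50), seed=None):
    """Simple random sample of job burst lengths mixing short and long jobs.

    The ranges and the 50/50 short/long mix are illustrative assumptions.
    """
    rng = random.Random(seed)    # a fixed seed makes the sample reproducible
    sample = []
    for _ in range(size):
        lo, hi = short if rng.random() < 0.5 else long_
        sample.append(rng.randint(lo, hi))
    return sample

sample = draw_burst_sample(seed=1)
```

Seeding the generator is useful here: the same random sample can be replayed through both SJF schemes so that any difference in waiting times is attributable to the scheduling policy, not to the sample.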

[...]

End of the reading sample from 97 pages

Details

Title
Program scheduling and simulation in an operating system environment
Institution
Massachusetts Institute of Technology
Grade
A
Author
bernard lampard
Year
2011
Pages
97
Catalog number
V267035
ISBN (eBook)
9783656587422
ISBN (Book)
9783656587415
File size
2401 KB
Language
English
Keywords
program
Cite this work
bernard lampard (Author), 2011, Program scheduling and simulation in an operating system environment, Munich, GRIN Verlag, https://www.grin.com/document/267035
