Parallel computing attempts to solve many complex problems by using multiple computing resources simultaneously. This review paper addresses some of the major operating-system design issues for shared memory parallel computers such as SMPs. Parallel computers can be classified according to the level at which the architecture supports parallelism, multi-core and multi-processor computers being common examples. The paper proceeds by specifying key operating-system design issues, namely process synchronization, memory management, communication, concurrency control, and scheduling, in the case of shared memory SMPs. It also elaborates some concerns of the Linux scheduler for shared memory SMP parallel computing. The basic objective of the paper is to provide a quick overview of the problems that may arise in designing an operating system for parallel computing.
Table of Contents
1. INTRODUCTION
2. DESIGN REQUIREMENTS --- SMP OPERATING SYSTEM
2.1 Synchronization
2.2 Scheduling
2.3 Cost of Communication
2.4 Granularity
2.5 Memory Management
2.6 Simultaneous concurrent processes – Deadlock
3. PROPOSED METHODOLOGY
3.1 Parallel Processors’ Scheduling Problems
3.1.1 Problems in load sharing
3.1.2 Self Scheduling Problem
3.1.3 Issues in scheduling owing to parallel tasks’ interdependencies
4. RELATED WORK
4.1 Linux Scheduler
4.2 SMP Concurrency Issues
5. CONCLUSION
6. FUTURE WORK
Research Objectives and Focus Areas
This paper aims to provide a comprehensive overview of the design challenges encountered when developing operating systems for parallel computing environments, specifically focusing on shared memory symmetric multiprocessor (SMP) systems. It analyzes key issues such as process synchronization, memory management, scheduling algorithms, and concurrency control to understand how operating systems must adapt to handle simultaneous execution of tasks effectively.
- Architectural requirements for SMP parallel operating systems.
- Mechanisms for inter-process synchronization and deadlocks.
- Scheduling challenges, including load sharing and dependency management.
- Performance impacts of communication costs and task granularity.
- Concurrency issues and resource access in multi-threaded environments.
Excerpt from the Book
2.1 Synchronization
Synchronization of parallel tasks in real time usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase [2]. There are two categories of processor synchronization: mutual exclusion and producer-consumer [7]. For mutual exclusion, only one of a number of concurrent activities at a time should be allowed to update some shared mutable state. For producer-consumer synchronization, a consumer must wait until the producer has generated the required value. Barriers, which synchronize many consumers with many producers, are also typically built using locks on conventional SMPs. Locking schemes cannot form the basis of a general parallel programming model.
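The mutual-exclusion category above can be sketched with a standard lock. The following is a minimal illustration (not code from the paper) using Python's `threading.Lock`: several threads increment a shared counter, and holding the lock around the read-modify-write ensures no increment is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times under mutual exclusion."""
    global counter
    for _ in range(n):
        with lock:          # only one thread may update the shared state at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates are lost while the lock is held
```

Removing the `with lock:` line reintroduces the race and can make the final count fall short, which is exactly why unsynchronized shared-state updates are unsafe on an SMP.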
Critical sections [5] are very common in parallel computing. Consider the producer-consumer example (as shown in Figure 1) to illustrate the critical section problem. A process produces elements that are stored in a shared buffer by calling a function store_element(). The full-buffer condition is tracked using a shared variable, counter. On the other hand, a consumer process obtains elements from this buffer by using a function obtain_element(), which also requires a critical section, since it decrements the counter by one. Apart from the access to the buffer, the producer-consumer example has another critical section on the access to the counter variable, which is also a shared resource. The code in Figure 1 has race conditions in the access to the buffer when it is either full or empty.
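Figure 1 itself is not reproduced in this excerpt. A minimal Python sketch of what store_element() and obtain_element() must do, protecting both the buffer and the counter inside one critical section and blocking on the full/empty conditions, might look as follows (the buffer size and helper names are assumptions for illustration):

```python
import threading
from collections import deque

BUFFER_SIZE = 8
buffer = deque()
counter = 0                   # shared count of buffered elements
cond = threading.Condition()  # one lock guards both buffer and counter

def store_element(item):
    """Producer side: wait while the buffer is full, then append."""
    global counter
    with cond:
        while counter == BUFFER_SIZE:
            cond.wait()
        buffer.append(item)
        counter += 1
        cond.notify_all()

def obtain_element():
    """Consumer side: wait while the buffer is empty, then remove."""
    global counter
    with cond:
        while counter == 0:
            cond.wait()
        item = buffer.popleft()
        counter -= 1
        cond.notify_all()
        return item

consumed = []

def producer():
    for i in range(100):
        store_element(i)

def consumer():
    for _ in range(100):
        consumed.append(obtain_element())

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(100)))  # True: all elements arrive, in FIFO order
```

Note that the same condition variable covers both critical sections identified in the text: the buffer accesses and the counter updates happen under one lock, which is what prevents the race conditions on the full and empty boundary cases.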
Summary of Chapters
1. INTRODUCTION: Outlines the shift from serial to parallel computing and categorizes computer systems based on Flynn's taxonomy while establishing the necessity of specialized operating system management.
2. DESIGN REQUIREMENTS --- SMP OPERATING SYSTEM: Examines the core technical hurdles in designing operating systems for SMP architectures, covering synchronization, scheduling, communication overhead, memory management, and deadlock prevention.
3. PROPOSED METHODOLOGY: Details the scheduling issues in parallel systems, demonstrating through mathematical modeling how load sharing and task dependencies significantly impact CPU utilization and performance.
4. RELATED WORK: Reviews existing approaches in the Linux scheduler regarding process priority, affinity, and system calls, while also illustrating the dangers of data corruption due to unmanaged concurrent access.
5. CONCLUSION: Summarizes the necessity of modifying serial operating system architectures to support the demands of concurrent job execution in future computing paradigms.
6. FUTURE WORK: Identifies the potential for expanding research into operating system design challenges for distributed memory systems, such as clusters.
Keywords
Parallel Computing, Operating System, SMP, Scheduling, Synchronization, Memory Management, Concurrency, Deadlock, Load Sharing, Multi-processor, Linux Scheduler, Process Management, Data Corruption, Task Dependency, Resource Management
Frequently Asked Questions
What is the core focus of this research paper?
The paper focuses on the design challenges and functional requirements for operating systems used in parallel computing environments, specifically symmetric multiprocessor (SMP) systems.
Which specific areas of operating system design are addressed?
The study covers key areas including process synchronization, memory management, scheduling algorithms, inter-task communication costs, and strategies for handling concurrent resource access.
What is the primary objective of this work?
The main objective is to provide a quick overview of the problems and design modifications required to transition operating systems from serial processing models to those capable of managing parallel execution efficiently.
How is the computational problem handled in the proposed methodology?
The methodology explains that complex problems are broken down into discrete tasks, which are further divided into instructions executed simultaneously, requiring the operating system to prioritize and schedule these tasks to ensure fair distribution.
What is discussed regarding the Linux scheduler in the related work section?
The paper discusses how the Linux scheduler is adapted for SMP systems, focusing on process priority, the use of hardware cache for performance optimization, and specific system calls for scheduling management.
Which keywords best describe this study?
Key terms include Parallel Computing, SMP, Scheduling, Synchronization, Concurrency, and Resource Management.
What does the "producer-consumer" example illustrate?
It illustrates the "critical section" problem, where race conditions occur if shared resources, such as a buffer or a counter variable, are accessed simultaneously by processes without proper synchronization.
What is the impact of granularity in parallel computing?
Granularity represents the ratio of computation to communication; fine-grained tasks may lead to excessive communication overhead that outweighs the benefits of parallel computation.
Why is the "load sharing" approach prone to inefficiencies?
Load sharing can lead to situations where processors remain idle if tasks are not assigned fairly or if their execution times vary significantly, causing load imbalances.
What is the potential danger of concurrent access to shared resources?
Concurrent access without proper locking mechanisms leads to data corruption, as demonstrated by the read-modify-write conflict where updates from multiple threads are lost.
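The lost-update hazard mentioned above can be replayed deterministically without real threads. This sketch (an illustration, not the paper's example) interleaves the read and write steps of two logical threads by hand, showing how one update silently disappears:

```python
# Deterministic replay of the read-modify-write conflict: two logical
# threads each read the same old value before either writes back.
balance = 100

# Thread A and thread B both intend to add 10 to the balance.
read_a = balance        # A reads 100
read_b = balance        # B reads 100, interleaved before A writes back
balance = read_a + 10   # A writes 110
balance = read_b + 10   # B overwrites with 110 -- A's update is lost

print(balance)  # 110, not the expected 120
```

With a lock held across each read-modify-write (as in the synchronization excerpt above), the interleaving in the middle two lines becomes impossible and the final value would be 120.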
- Quote paper
- Sabih Jamal (Author), Muhammad Waseem (Author), Muhammad Aslam (Author), 2014, Operating System for Parallel Computing: Issues and Problems, Munich, GRIN Verlag, https://www.grin.com/document/273160