Parallel computing attempts to solve complex problems by using multiple computing resources simultaneously. This review paper addresses some of the major operating system design issues for shared memory parallel computers such as SMPs. Parallel computers can be classified by the level at which the architecture supports parallelism, with multi-core and multi-processor machines being common examples. The paper proceeds by examining key operating system design issues, including process synchronization, memory management, communication, concurrency control, and scheduling, for shared memory SMPs. It also discusses concerns with the Linux scheduler for shared memory SMP parallel computing. The basic objective of the paper is to provide a quick overview of the problems that may arise in designing an operating system for parallel computing.
Table of Contents
- Introduction
- Design Requirements
- Synchronization
- Scheduling
- Cost of Communication
- Granularity
- Memory Management
- Simultaneous Deadlock
- Proposed Methodology
- Parallel Processors' Scheduling Problems
- Problems in Load Sharing
- Self Scheduling Problem
- Issues in Scheduling Owing to Parallel Tasks' Inter Dependencies
- Related Work
- Linux Scheduler
Frequently Asked Questions
What are the major design issues for parallel computing operating systems?
Key issues include process synchronization, memory management, communication efficiency, concurrency control, and task scheduling, particularly in Shared Memory Symmetric Multiprocessors (SMPs).
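As a concrete illustration of the synchronization issue, here is a minimal sketch (not taken from the paper) of two POSIX threads updating a shared counter on a shared memory machine; the mutex serializes the critical section at the cost of contention between processors.

```c
/* Minimal sketch: two threads updating a shared counter on a shared
 * memory machine. Without the mutex the increments race; with it,
 * access is serialized but the lock becomes a contention point. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);            /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);          /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* expect 2000000 */
    return 0;
}
```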
How is scheduling different in parallel systems?
Scheduling in parallel systems must account for interdependencies between tasks, load sharing across multiple processors, and the overhead of "self-scheduling", where idle processors dynamically claim the next unit of work from a shared queue, as sketched below.
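A minimal sketch of the self-scheduling idea, assuming a shared iteration counter as the work queue (the chunk size and thread count are illustrative, not from the paper): each idle worker atomically claims the next chunk, so the shared counter itself becomes the point of contention that self-scheduling introduces.

```c
/* Illustrative self-scheduling: workers repeatedly claim the next chunk
 * of loop iterations from a shared counter until the work runs out. */
#include <pthread.h>
#include <stdatomic.h>

#define TOTAL 1000
#define CHUNK 10

static atomic_int next_index = 0;            /* next unclaimed iteration */

static void *worker(void *arg) {
    for (;;) {
        int start = atomic_fetch_add(&next_index, CHUNK);
        if (start >= TOTAL)
            break;                            /* no work left */
        int end = (start + CHUNK < TOTAL) ? start + CHUNK : TOTAL;
        for (int i = start; i < end; i++) {
            /* process iteration i */
        }
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```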
What is the "simultaneous deadlock" problem?
It refers to a situation in which multiple processes or processors each hold a resource while waiting for another to release one, so none can proceed and the whole system stalls. Avoiding such deadlocks is a critical design requirement.
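A hypothetical two-lock example of how such a deadlock arises: each thread holds one mutex and waits for the other's, so neither ever proceeds. Enforcing a single global lock ordering would prevent it.

```c
/* Classic two-lock deadlock: thread A holds lock1 and waits for lock2,
 * thread B holds lock2 and waits for lock1. Both block forever. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
    pthread_mutex_lock(&lock1);
    sleep(1);                        /* give thread B time to take lock2 */
    pthread_mutex_lock(&lock2);      /* blocks forever: B holds lock2 */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *thread_b(void *arg) {
    pthread_mutex_lock(&lock2);
    sleep(1);
    pthread_mutex_lock(&lock1);      /* blocks forever: A holds lock1 */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);           /* never returns */
    pthread_join(b, NULL);
    return 0;
}
```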
What concerns does the Linux scheduler have regarding SMP parallel computing?
The paper elaborates on how the Linux scheduler manages tasks on shared memory SMPs and the specific challenges it faces in distributing work across processors while minimizing communication costs.
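As one concrete example of the trade-offs involved (an illustration, not the paper's method), Linux lets a process restrict itself to particular CPUs with sched_setaffinity; pinning a task preserves cache locality and reduces cross-processor traffic, but gives up the scheduler's automatic load balancing for that task.

```c
/* Sketch of one lever the Linux scheduler exposes on SMPs: pinning the
 * calling process to CPU 0 so it is never migrated between processors. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                              /* allow only CPU 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}
```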
What is the role of granularity in parallel computing?
Granularity refers to the ratio of computation to communication. Proper granularity is essential for system performance, as high communication overhead can negate the benefits of parallel processing.
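A back-of-the-envelope sketch of that ratio, using assumed per-iteration and per-synchronization costs (the numbers are placeholders, not measurements from the paper): coarser grains amortize each synchronization over more computation, lowering the communication-to-computation ratio.

```c
/* Illustrative granularity calculation: splitting `total` iterations into
 * chunks of `grain` iterations costs one synchronization per chunk. */
#include <stdio.h>

int main(void) {
    const double compute_per_iter = 1.0;   /* assumed cost of one iteration  */
    const double sync_cost = 100.0;        /* assumed cost of one sync point */
    const long total = 1000000;

    for (long grain = 1; grain <= 10000; grain *= 10) {
        long syncs = total / grain;
        double compute = total * compute_per_iter;
        double overhead = syncs * sync_cost;
        printf("grain %6ld: comm/comp ratio = %.4f\n",
               grain, overhead / compute);
    }
    return 0;
}
```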
- Cite this work
- Sabih Jamal (Author), Muhammad Waseem (Author), Muhammad Aslam (Author), 2014, Operating System for Parallel Computing: Issues and Problems, München, GRIN Verlag, https://www.grin.com/document/273160