Analysis and design of algorithms. A critical comparison of different works on algorithms


Academic Text, 2019

14 Pages, Grade: 4.00


Excerpt


REVIEW OF ALGORITHMS DESIGN AND ANALYSIS

Gabriel Kabanda

ABSTRACT

The paper presents an analytical exposition, critical context, and integrative conclusion on six major textbooks on Algorithm Design and Analysis. An algorithm is a sequence of unambiguous instructions for solving a problem, used for obtaining a required output for any legitimate input in a finite amount of time. Algorithms can be considered procedural solutions to problems, where the focus is on correctness and efficiency. The important problem types are sorting, searching, string processing, graph problems, combinatorial problems, geometric problems, and numerical problems. Divide-and-conquer is a general algorithm design technique that solves a problem by dividing it into several smaller sub-problems of the same type and of roughly the same size, solving each of them recursively, and then combining their solutions to obtain a solution to the original problem. The brute-force approach to combinatorial problems is exhaustive search, which generates each and every combinatorial object of the problem, selects those that satisfy all the constraints, and continues until a desired object is found. Whereas the main tool for analyzing the time efficiency of a nonrecursive algorithm is to set up a sum expressing the number of executions of its basic operation and ascertain the sum’s order of growth, the main tool for analyzing the time efficiency of a recursive algorithm is to set up a recurrence relation expressing the number of executions of its basic operation and ascertain the solution’s order of growth. A graph with directions on its edges is called a digraph. Listing the vertices of a digraph in an order such that, for every edge of the digraph, the vertex it starts at is listed before the vertex it points to is the topological sorting problem, which has a solution if and only if the digraph is a dag (directed acyclic graph), i.e., it has no directed cycles. The topological sorting problem can be solved by two algorithms: one based on depth-first search, and the other based on a direct application of the decrease-by-one technique. The three major variations of decrease-and-conquer are: decrease-by-a-constant, most often by one (e.g., insertion sort); decrease-by-a-constant-factor, most often by a factor of two (e.g., binary search); and variable-size-decrease (e.g., Euclid’s algorithm).

1. ANALYTICAL EXPOSITION

The paper presents an analytical exposition, critical context, and integrative conclusion on six major textbooks on Algorithm Design and Analysis. Algorithms form the heart of Computer Science in general. An algorithm is simply a set of steps for accomplishing a task, described precisely enough that a computer can run it. Levitin, A. (2011, p.3) defines an algorithm as a sequence of unambiguous instructions for solving a problem, which is used for obtaining a required output for any legitimate input in a finite amount of time. Generally, algorithms are procedural solutions to problems. On a similar note, an algorithm is any well-defined computational procedure that takes some values as input and produces some values as output (Mount, D.M., 2003, p.2). According to Levitin, A. (2011, p.11), an algorithm design technique is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing. Erickson, J. (2019) puts it more precisely: an algorithm is an explicit, precise, unambiguous, mechanically-executable sequence of elementary instructions, usually intended to accomplish a specific purpose. An algorithm is an efficient method that can be expressed within a finite amount of time and space, and it is independent of any programming language, machine, system, or compiler. An input specifies an instance of the problem that the algorithm solves. Algorithms can be specified in a natural language or in pseudocode, and they can also be implemented as computer programs, where the focus is on correctness and efficiency (Mount, D.M., 2003, p.2); a minimal illustration of this follows the list below. The most important problem types where algorithms are used are sorting, searching, string processing, graph problems, combinatorial problems, geometric problems, and numerical problems. Among several ways to classify algorithms, there are two principal alternatives according to Levitin, A. (2005, p.38), and these are to group algorithms according to:

- the types of problems they solve
- underlying design techniques they are based upon.
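To make the idea of a precise, finite, mechanically-executable sequence of instructions concrete, here is a minimal sketch of Euclid's algorithm for the greatest common divisor (later cited as an example of variable-size-decrease). The function name and layout are illustrative, not drawn from the reviewed texts.

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), stopping when n = 0."""
    while n != 0:
        m, n = n, m % n   # the problem instance shrinks on every iteration
    return m

print(gcd(60, 24))  # prints 12
```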

Kleinberg, J., and Tardos, E. (2005, p.1), in their book on Algorithm Design, introduce in Chapter One an algorithmic problem that precisely illustrates many of the themes of algorithm design, called the Stable Matching Problem. According to Kleinberg, J., and Tardos, E. (2005, p.4), matchings and perfect matchings arise naturally in modeling a wide range of algorithmic problems, and they recur frequently throughout the book. The Stable Matching Problem originated from the need to design a self-enforcing college admissions process, or a job recruiting process. For a given set of preferences among employers and applicants, the key question is: how can we assign applicants to employers so that, for every employer E and every applicant A who is not scheduled to work for E, at least one of the following two things is the case?

(i) E prefers every one of its accepted applicants to A; or
(ii) A prefers her current situation over working for employer E.
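A matching can be tested directly against this condition. The sketch below is illustrative only: it assumes preferences are given as rank tables (lower rank means more preferred) and that the matching pairs every employer with exactly one applicant; the names and data layout are not taken from the reviewed texts.

```python
def is_stable(matching, emp_rank, app_rank):
    """Return True if 'matching' (a dict employer -> applicant) satisfies the
    stability condition: for every employer E and applicant A not matched to E,
    either (i) E prefers its current applicant to A, or (ii) A prefers her
    current employer to E. Lower rank = more preferred."""
    matched_employer = {a: e for e, a in matching.items()}  # applicant -> employer
    for e, current_a in matching.items():
        for a in emp_rank[e]:                    # every applicant E could consider
            if a == current_a:
                continue
            e_prefers_a = emp_rank[e][a] < emp_rank[e][current_a]
            a_prefers_e = app_rank[a][e] < app_rank[a][matched_employer[a]]
            if e_prefers_a and a_prefers_e:
                return False                     # (E, A) form an instability
    return True

# Illustrative data: E1 and A1 are mutual first choices, so this matching is stable.
emp_rank = {"E1": {"A1": 0, "A2": 1}, "E2": {"A1": 0, "A2": 1}}
app_rank = {"A1": {"E1": 0, "E2": 1}, "A2": {"E1": 0, "E2": 1}}
print(is_stable({"E1": "A1", "E2": "A2"}, emp_rank, app_rank))  # True
```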

Kleinberg, J., and Tardos, E. (2005, p.4) presented the Gale-Shapley version of the problem, in which a perfect matching corresponds simply to a way of pairing off the men with the women so that everyone ends up married to somebody and nobody is married to more than one person; that is, there is neither polygamy nor singlehood. Generally, we perform the following types of analysis on an algorithm, taken over any instance of size n (a small sequential-search illustration follows the list):

- Worst-case (the maximum number of steps)
- Best-case (the minimum number of steps)
- Average case (an average number of steps)
- Amortized (the cost of a sequence of operations applied to an input of size n, averaged over the operations).
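As a minimal illustration of the first three cases, consider sequential search, where the basic operation counted is the key comparison; the routine below is an illustrative sketch.

```python
def sequential_search(a, key):
    """Return the index of key in list a, or -1 if absent."""
    for i, x in enumerate(a):   # basic operation: the comparison x == key
        if x == key:
            return i            # best case: key is in the first position, 1 comparison
    return -1                   # worst case: key is absent, n comparisons

# Average case depends on the input distribution: if the key is equally likely
# to occupy any of the n positions, roughly (n + 1) / 2 comparisons are made.
```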

A particular scheme of organizing related data items is called a data structure (Levitin, A., 2011, p.25). Since algorithms operate on data, the issue of data structuring is critical for efficient algorithmic problem solving. The most important elementary data structures are the array and the linked list, which in turn are used for representing more abstract data structures such as the list, the stack, the queue, the graph (via its adjacency matrix or adjacency lists), the binary tree, and the set. An abstract data type (ADT) is an abstract collection of objects with several operations that can be performed on them (Levitin, A., 2011, p.39). Implementation of ADTs is achieved in modern object-oriented languages by means of classes.
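For instance, a stack ADT can be realized as a class whose underlying storage is an array (here a Python list); the class below is a minimal illustrative sketch, not taken from any of the reviewed texts.

```python
class Stack:
    """A stack ADT: the objects are items, the operations are push, pop, and is_empty."""
    def __init__(self):
        self._items = []          # underlying elementary structure: an array

    def push(self, item):
        self._items.append(item)  # insert at the top

    def pop(self):
        return self._items.pop()  # remove and return the top item

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push(1); s.push(2)
print(s.pop())  # 2 (last in, first out)
```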

The major focus in algorithm design is to find efficient algorithms for computational problems. An algorithm is efficient if it runs quickly when implemented on real input instances; achieves qualitatively better worst-case performance, at an analytical level, than brute-force search (Kleinberg, J., and Tardos, E., 2005, p.33); or has a polynomial running time (Kleinberg, J., and Tardos, E., 2005, p.35). Time complexity is a measure of time efficiency, which indicates how fast the algorithm in question runs. Space complexity measures space efficiency, which is the amount of memory units required by the algorithm in addition to the space needed for its input and output (Levitin, A., 2011, p.42). Whereas time complexity is calculated from a count of the number of times the algorithm’s basic operation is executed, space complexity is calculated from a count of the extra memory units consumed by the algorithm. The worst-case efficiency of an algorithm is its efficiency for the input of size n for which the algorithm runs the longest among all possible inputs of that size (Levitin, A., 2011, p.47). The best-case efficiency of an algorithm is its efficiency for the input of size n for which the algorithm runs the fastest among all possible inputs of that size (Levitin, A., 2011, p.48). Both time and space efficiencies are functions of the algorithm’s input size. However, the efficiencies of some algorithms may differ significantly for inputs of the same size, and so we need to distinguish among the worst-case, average-case, and best-case efficiencies. An algorithm’s running time may grow without bound as its input size goes to infinity. The asymptotic orders of growth of functions expressing algorithm efficiencies are denoted by the notations O, Ω, and Θ, which are used to indicate and compare them. Without loss of generality, the efficiencies of a large number of algorithms fall into a few classes: constant, logarithmic, linear, linearithmic, quadratic, cubic, and exponential (Levitin, A., 2011, p.95). Whereas the main tool for analyzing the time efficiency of a nonrecursive algorithm is to set up a sum expressing the number of executions of its basic operation and ascertain the sum’s order of growth, the main tool for analyzing the time efficiency of a recursive algorithm is to set up a recurrence relation expressing the number of executions of its basic operation and ascertain the solution’s order of growth.
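To make the two analysis tools concrete, the two small routines below echo standard textbook examples; they are illustrative sketches, not code from the reviewed texts. For the nonrecursive nested loop, summing the worst-case comparison counts gives (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2, which is in Θ(n²); for the recursive factorial, the recurrence M(n) = M(n - 1) + 1 with M(0) = 0 solves to M(n) = n, which is in Θ(n).

```python
def unique_elements(a):
    """Nonrecursive: basic operation is the comparison a[i] == a[j].
    Worst-case count (no duplicates): n(n - 1)/2 comparisons, i.e., Θ(n^2)."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True

def factorial(n):
    """Recursive: basic operation is the multiplication n * factorial(n - 1).
    Recurrence: M(n) = M(n - 1) + 1, M(0) = 0, whose solution M(n) = n is Θ(n)."""
    return 1 if n == 0 else n * factorial(n - 1)
```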

A function t (n) is said to be in O(g(n)), denoted t (n) ∈ O(g(n)), if t (n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n0 such that

t (n) ≤ cg(n) for all n ≥ n0.

A function t (n) is said to be in Ω(g(n)), denoted t (n) ∈ Ω(g(n)), if t (n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n0 such that

t (n) ≥ cg(n) for all n ≥ n0.

A function t (n) is said to be in Θ(g(n)), denoted t (n) ∈Θ (g(n)), if t (n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some non-negative integer n0 such that c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0 (Levitin, A., 2011, p.54).
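As a small illustrative instance of these definitions (not drawn from the reviewed texts): t (n) = 3n + 2 is in O(n) because 3n + 2 ≤ 4n for all n ≥ 2 (take c = 4 and n0 = 2); it is also in Ω(n) because 3n + 2 ≥ 3n for all n ≥ 0 (take c = 3 and n0 = 0); and hence 3n + 2 ∈ Θ(n).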

Levitin, A. (2011, p.95) observed that the succinctness of a recursive algorithm may mask its inefficiency. The Fibonacci numbers are the sequence of integers in which, after the first two elements, every element is equal to the sum of its two immediate predecessors. Fibonacci numbers can be computed using several algorithms with drastically different efficiencies. Algorithm visualization is the use of images to convey useful information about algorithms (Levitin, A., 2011, p.95). Algorithm visualization occurs in one of two forms: static algorithm visualization and dynamic algorithm visualization (also called algorithm animation).
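A minimal sketch of that contrast, assuming the sequence starts 0, 1: the succinct recursive version recomputes the same values and makes an exponential number of calls, while the simple bottom-up loop performs a linear number of additions.

```python
def fib_recursive(n):
    """Succinct but inefficient: the call tree recomputes the same values,
    so the number of calls grows exponentially in n."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Bottom-up computation: about n additions, i.e., linear time."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_recursive(10), fib_iterative(10))  # 55 55
```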

[...]

