
Parallel Algorithms

Course ID
CEID_NE4128
Professor
KAKLAMANIS CHRISTOS, PAPAIOANNOU EVI
Semester
Spring
ECTS
5

Parallel computation involves multiple processors (or computers) working together on a common problem. Each processor works on the part of the problem assigned to it, and processors can exchange information. Sequential computation (i.e., computation with a single processor), on the other hand, follows the von Neumann stored-program model.

Several processes in nature and everyday life involve parallelism, for example weather phenomena, the movement of tectonic plates, the formation of galaxies, road traffic, and car and aircraft construction. Parallel computation is used in many areas of research and development, such as Computer Science, Mathematics, Electrical Engineering, Mechanics, Physics, Life Sciences, Biotechnology, Genetics, Chemistry, Geology, and Seismology. Within Computer Science in particular, parallel computing is used for tasks such as data mining, Web search, image processing, virtual reality, monetary and financial modeling, organization management, video and multimedia network technologies, and collaborative work environments.

Parallel computation is used to overcome the constraints imposed by sequential computation, and in particular to save time and money, to solve problems of large size, to provide simultaneous access to information and services, and to exploit non-local resources (e.g., SETI@home).

This course addresses the concept, application, and efficiency of parallelism in the design and execution of computational tasks (algorithms). The problems and algorithms studied include computational tasks performed within a chip or microprocessor, such as sorting, counting, addition, multiplication, prefix computation, and convolution; computational tasks performed in a computer system, such as sorting and routing; and further tasks related to communication in data networks, resource allocation, and so on. Various models of fixed-connection networks are studied, such as linear arrays, meshes, trees, hypercubes, and butterflies. The proposed parallel algorithms are analyzed in terms of their speed-up over corresponding sequential algorithms, as well as in terms of their efficiency and work, i.e., how uniformly the available resources are used. Asymptotic analysis is used to evaluate performance by providing upper and lower bounds. The analysis is mainly theoretical, but it can also be supported by experimentation.
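For orientation, the performance measures mentioned above can be summarized as follows; this is a minimal sketch of the standard definitions, and the notation (p processors, sequential time T_1, parallel time T_p) is ours rather than fixed by the course:

% Standard performance measures for a parallel algorithm that runs in time T_p
% on p processors, where T_1 is the running time of the best sequential algorithm.
\[
  S_p = \frac{T_1}{T_p} \ \text{(speed-up)}, \qquad
  E_p = \frac{S_p}{p} = \frac{T_1}{p\,T_p} \ \text{(efficiency)}, \qquad
  W_p = p \cdot T_p \ \text{(work)}.
\]
% Resources are used uniformly (the algorithm is work-efficient)
% when W_p = O(T_1), i.e., when E_p = \Theta(1).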

Lectures, problem-solving sessions, and lab sessions cover the following topics:

Properties of the fixed-connection network model

Elementary sorting and counting

Integer arithmetic
  Carry-lookahead addition
  Prefix computation
  Carry-save addition
  Multiplication and convolution

Sorting
  Odd-even transposition sort on a linear array (see the sketch after this list)
  A simple sorting algorithm on a mesh
  Improved sorting algorithm for meshes
  A tight lower bound

Packet routing
  Greedy algorithms
  Average-case analysis of greedy algorithms
  Routing packets to random destinations
  Randomized routing algorithms
  Deterministic routing algorithms with small queues
  Off-line routing
  Other routing models and algorithms
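
To make one of the sorting topics above concrete, here is a minimal sketch of odd-even transposition sort, simulated sequentially in Python; the function name and the list-based simulation of the linear array are our own illustration, not material taken from the course:

def odd_even_transposition_sort(a):
    """Simulate odd-even transposition sort on a linear array of n processors.

    Processor i holds a[i]. In each of n rounds, adjacent pairs (even-odd
    pairs in even rounds, odd-even pairs in odd rounds) compare their two
    values and swap them if they are out of order; within a round all
    comparisons are independent, so they take one parallel step.
    """
    n = len(a)
    for step in range(n):
        for i in range(step % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

# Example: eight values are sorted after eight parallel rounds.
print(odd_even_transposition_sort([5, 2, 7, 1, 8, 3, 6, 4]))

Run on a single processor this is essentially a variant of bubble sort, but on an n-processor linear array each round costs one step, giving a Θ(n)-step parallel algorithm; this is asymptotically optimal, since the diameter of the array already forces Ω(n) steps.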