UC3M

Telematic/Audiovisual Syst./Communication Syst. Engineering

Systems Architecture

September 2017 - January 2018

Chapter 11.  Threads

11.1.  Concurrent programming

Formally, concurrent computing is defined as a form of computing in which several executions take place over overlapping time intervals, that is, "concurrently", rather than sequentially (one completing its execution before the next one starts).

  • One feature of a concurrent system is that a computation can make progress without waiting for all previous computations to finish. It is also characterized by having more than one computation in progress at the same time.

  • As a programming paradigm, concurrent programming is a form of modular programming in which the computation is factored into blocks of code that can run concurrently with the rest.

A more informal definition would be a form of computing in which different activities overlap in time to improve performance. A familiar example is our working environment on a personal computer, where an editor, a compiler, a browser, and other applications run concurrently; i.e. we do not wait for one task to finish before starting the next one.

Many other examples of concurrent systems exist, such as airport or railway applications, which run their tasks concurrently to reduce execution times significantly and to make better use of expensive infrastructure.

Let's see some of the potential benefits provided by concurrent programming:

  • Improvements in application performance. Applications that run concurrently tend to finish earlier because they make more efficient use of resources. This is particularly interesting for today's multicore computer systems, where concurrent applications can see their performance improve.

  • Improved response times for input/output activities. Many applications that block on input and output operations tend to block for less time when executed concurrently.

  • They make CPU time that would otherwise go unused available for other tasks.

  • Suitability for solving "certain" problems concurrently. Some problems can be solved more efficiently in a concurrent way. A classic example is searching a maze, where a thread can be launched to explore each of the paths. Other algorithms, such as searching for data in a data structure, may also run more efficiently with concurrent support (see the sketch after this list).
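
As an illustration of the last point, here is a minimal sketch of a concurrent search over an array using POSIX threads (presented in the next paragraph): the array is split into slices and one thread searches each slice. The names NUM_THREADS, slice_t and search_slice are illustrative choices for this sketch, not part of any standard API; like the other sketches in this chapter, it can be compiled with gcc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define N 16

    typedef struct {
        const int *data;   /* start of this thread's slice */
        int len;           /* slice length */
        int target;        /* value to look for */
        int found;         /* result: 1 if target is in the slice */
    } slice_t;

    static void *search_slice(void *arg) {
        slice_t *s = arg;
        s->found = 0;
        for (int i = 0; i < s->len; i++)
            if (s->data[i] == s->target)
                s->found = 1;
        return NULL;
    }

    int main(void) {
        int v[N] = {3, 9, 1, 7, 5, 8, 2, 6, 4, 0, 11, 13, 12, 15, 14, 10};
        pthread_t tid[NUM_THREADS];
        slice_t s[NUM_THREADS];

        /* Launch one thread per slice of the array */
        for (int i = 0; i < NUM_THREADS; i++) {
            s[i] = (slice_t){v + i * (N / NUM_THREADS), N / NUM_THREADS, 13, 0};
            pthread_create(&tid[i], NULL, search_slice, &s[i]);
        }

        /* Wait for all threads and combine their results */
        int found = 0;
        for (int i = 0; i < NUM_THREADS; i++) {
            pthread_join(tid[i], NULL);
            found |= s[i].found;
        }
        printf("13 %s found\n", found ? "was" : "was not");
        return 0;
    }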

One of the libraries available in Linux for concurrent programming is POSIX threads (pthreads). Among all the possibilities and functionality it provides (there are over 100 functions in pthread), the most relevant groups are those responsible for:

  • Thread management: creation and joining. It allows concurrent execution units to be created, their life cycle to be managed, and a result to be returned at the end of each thread's execution (see the first sketch after this list).

  • Locks. They allow synchronized access to data shared between threads. With them, threads avoid corrupted reads and writes (known as "race conditions"); see the second sketch after this list.
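
Below is a minimal sketch of the first group, the thread life cycle: pthread_create launches a concurrent execution unit and pthread_join waits for it and collects the result it returns. The worker function square is an illustrative name chosen for this example.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *square(void *arg) {
        int n = *(int *)arg;
        int *result = malloc(sizeof(int));  /* freed by the joining thread */
        *result = n * n;
        return result;                      /* delivered through pthread_join */
    }

    int main(void) {
        pthread_t tid;
        int n = 7;
        void *res;

        pthread_create(&tid, NULL, square, &n);  /* create the thread */
        pthread_join(tid, &res);                 /* wait for it and get its result */
        printf("square(%d) = %d\n", n, *(int *)res);
        free(res);
        return 0;
    }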
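
And a minimal sketch of the second group, assuming two threads that increment a shared counter: without the mutex the increments of both threads could interleave and corrupt the final value (a race condition); with it, each increment becomes a critical section executed by one thread at a time.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&lock);    /* enter the critical section */
            counter++;                    /* shared data, now updated safely */
            pthread_mutex_unlock(&lock);  /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
        return 0;
    }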

The rest of this chapter explains this POSIX interface through guided examples.