
Programming Paradigms for Concurrency

Part 2: Transactional Memories

Vasu Singh

vasu.singh@ist.ac.at

Ubiquitous Computing

Ever increasing performance!

Ever increasing performance!

Everyone expects computers to get faster

But, what triggers this perennial performance gain?

Moore's Law

Number of transistors per chip doubles once every two years

Hoped to continue until around 2015

However,

To get more speed, a processor needs higher chip frequency too

Around 2003, chip manufacturers hit the “heat wall”

Heat wall: Further increase in frequency would destroy the processor due to excessive heating (heat dissipation is cubic in chip frequency)

Alternative to a faster processor

Instead of making one processor faster, add more processors (keeps heat low, and the processor green)

But, how do we make a program faster with multiple processors?

Do multiple things in parallel

A paradigm shift!

This course is about this paradigm shift

Writing Sequential Programs: Easy

X := X + 1 ; Y := Y + 1 ; Z := Z + 1

Correctness: At the end of the program, the variables X, Y, and Z must be incremented by 1.

Writing Parallel Programs: Harder

The programmer must divide work for different processors

The workers are known as “threads”

Examples:

X := X + 1 || Y := Y + 1

X := X + 1 || X := X + 1

Correctness of Parallel Programs

The effect of the program should be as if all threads executed sequentially

When threads do not share data

X := X + 1 || Y := Y + 1

At the end of the program, X and Y should be incremented by 1

Easy to guarantee

When threads share data

X := X + 1 || X := X + 1

At the end of the program, X should be incremented by 2

Not so easy to guarantee: need to make sure that threads do not interfere

Concurrency

Different threads may work on the same data

Concurrency: How the threads should interact so that they do not produce unexpected results

Two paradigms:

Shared memory concurrency (Pavol and I)

Message passing concurrency (Thomas)

We focus on shared memory concurrency now

Synchronization

Parallel programs demand synchronization: a discipline for the threads to access shared variables

Lack of synchronization leads to errors, commonly known as “concurrency bugs”

For example: in (X:=X + 1 || X:=X + 1), when X is initially 0, we can get X=1
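A minimal Java sketch of this bug (my own illustration, not code from the course): two threads each run X := X + 1 without synchronization, and if both read X before either writes, one update is lost.

    // Two threads increment a shared counter without synchronization.
    // Expected result: 2. Possible result: 1 (a lost update).
    public class LostUpdate {
        static int x = 0;                 // shared variable, initially 0

        public static void main(String[] args) throws InterruptedException {
            Runnable inc = () -> {
                int tmp = x;              // read X
                x = tmp + 1;              // write X + 1 (read and write are not atomic together)
            };
            Thread t1 = new Thread(inc);
            Thread t2 = new Thread(inc);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("X = " + x);   // prints 2, or 1 under the bad interleaving
        }
    }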

Demo

Sequential Bank Account

Synchronized Parallel Bank Account

Unsynchronized Parallel Bank Account
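The slides refer to a live demo; a rough Java sketch of the two parallel variants (class and method names are my own, not the demo code) could look like this:

    // Unsynchronized: concurrent deposits can be lost, as in the counter example.
    class UnsyncAccount {
        private int balance = 0;
        void deposit(int amount) {            // read-modify-write, not atomic
            balance = balance + amount;
        }
        int balance() { return balance; }
    }

    // Synchronized: at most one thread runs deposit at a time, so no update is lost.
    class SyncAccount {
        private int balance = 0;
        synchronized void deposit(int amount) {
            balance = balance + amount;
        }
        synchronized int balance() { return balance; }
    }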

Lock-based Synchronization

While using locks, you have to guarantee a few more things:

Mutual exclusion

Starvation freedom

Deadlock freedom
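To illustrate why these obligations are nontrivial (my own example, not from the slides): a transfer that acquires two locks can deadlock when another thread acquires the same locks in the opposite order.

    import java.util.concurrent.locks.ReentrantLock;

    // Deadlock hazard with per-account locks:
    // transfer(a, b, n) || transfer(b, a, m) can block forever,
    // each thread holding one lock and waiting for the other.
    class LockedAccount {
        final ReentrantLock lock = new ReentrantLock();
        int balance = 0;
    }

    class Bank {
        static void transfer(LockedAccount from, LockedAccount to, int amount) {
            from.lock.lock();
            try {
                to.lock.lock();               // may wait forever if locks are taken in opposite orders
                try {
                    from.balance -= amount;
                    to.balance += amount;
                } finally { to.lock.unlock(); }
            } finally { from.lock.unlock(); }
        }
    }

The usual fix is a global lock-acquisition order, which is exactly the kind of extra discipline that transactional memory aims to take off the programmer's shoulders.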

Alternative programmer-friendly technique

The programmer marks program fragments as transactions
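A sketch of this programming model (my own illustration, not the course code): the programmer only wraps the fragment in atomic(...). A real TM executes the block speculatively and retries it on conflict; here a single global lock stands in for the TM runtime so that the sketch stays self-contained and runnable.

    // atomic(...) marks a program fragment as a transaction.
    // In this sketch it is backed by one global lock; a real TM would
    // instead run the block speculatively and abort/retry on conflict.
    class MiniAtomic {
        private static final Object GLOBAL = new Object();

        static void atomic(Runnable block) {
            synchronized (GLOBAL) {
                block.run();
            }
        }
    }

    class Accounts {
        private int from = 100, to = 0;

        void transfer(int amount) {
            MiniAtomic.atomic(() -> {         // the marked fragment appears atomic to other threads
                from -= amount;
                to += amount;
            });
        }
    }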

Demo

Transactional memory

A piece of hardware/software that guarantees that program fragments marked as “transactions” do execute atomically

How does a TM work?

[Diagram: Program Threads ↔ Transactional Memory ↔ Parallel Hardware]

• The threads execute transactions.

• The transactions interact with the hardware via the transactional memory.

• The transactional memory keeps track of all accesses in the different transactions.

• If the accesses of two threads conflict, the TM aborts the transactions.
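A highly simplified sketch of that bookkeeping (my own, not a real TM implementation): every transaction records a read set (which versions it saw) and a write set (which values it intends to install); at commit time it aborts if any variable it read has been changed by another committed transaction.

    import java.util.HashMap;
    import java.util.Map;

    // A transactional variable: a value plus a version bumped on every commit.
    class TVar {
        int value;
        long version;
        TVar(int v) { value = v; }
    }

    class Txn {
        private final Map<TVar, Long> readSet = new HashMap<>();     // var -> version observed
        private final Map<TVar, Integer> writeSet = new HashMap<>(); // var -> value to install
        private static final Object COMMIT_LOCK = new Object();

        int read(TVar v) {
            if (writeSet.containsKey(v)) return writeSet.get(v);     // read your own writes
            synchronized (COMMIT_LOCK) {
                readSet.putIfAbsent(v, v.version);                   // remember the version we saw
                return v.value;
            }
        }

        void write(TVar v, int value) { writeSet.put(v, value); }    // buffered until commit

        // Returns true if the transaction committed, false if it conflicted and must be retried.
        boolean commit() {
            synchronized (COMMIT_LOCK) {
                for (Map.Entry<TVar, Long> e : readSet.entrySet())
                    if (e.getKey().version != e.getValue())
                        return false;                                // something we read has changed: abort
                for (Map.Entry<TVar, Integer> e : writeSet.entrySet()) {
                    e.getKey().value = e.getValue();
                    e.getKey().version++;
                }
                return true;
            }
        }
    }

A real TM additionally wraps this in a retry loop, adds contention management, and provides stronger consistency guarantees for reads inside running transactions.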

Problems with TM

Hard work pays: the performance of your program may not scale as well as it would with fine-grained locking

Speculative execution leads to several I/O issues inside transactions (remember, transactions may abort, so a transaction should not produce any output until it is sure to commit)

And many more…

That's what this course is for!

Course Outline

November 11: History of TM (dates back to 1991!)

November 18: Correctness properties in TM, Examples, STM

November 25: Formal Semantics of transactional programs

December 2: Performance issues in implementing STM

Projects

Seminar based (study a coherent set of papers and summarize them in a presentation and a report)

Implementation based

Implementation: task-driven efficient TM

Verification: model checking, runtime verification

Contact me, and we decide together

Thank You
