
Threads

Section 2.2

Introduction to threads

• A thread (of execution) is a light-weight process
  – Threads reside within processes.
  – They share one address space, which means they share data as well as files (the sketch after this list illustrates the sharing).

• A process initially has one thread of execution
  – The initial thread can create multiple threads to accomplish distinct tasks (more on this soon)
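To make the shared address space concrete, here is a minimal sketch (an assumed example using POSIX Pthreads, not code from the slides): a global variable written by one thread is visible to the initial thread, because both run inside the same process.

/* Minimal sketch (assumed example): threads in one process share memory. */
#include <pthread.h>
#include <stdio.h>

static int shared_value = 0;              /* lives in the one shared data segment */

static void *writer(void *arg)
{
    shared_value = 42;                    /* no copying: same memory as main() */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    pthread_join(t, NULL);                /* wait, so the write has happened */
    printf("initial thread sees shared_value = %d\n", shared_value);
    return 0;
}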

The goals

• The ability for multiple threads of execution to share a set of resources, so they can work together to perform some larger task (pp. 82-83)

• The use of threads allows each thread to use blocking system calls without affecting the other threads of the same process.
  – While one thread is blocked on I/O, another thread can execute within the same process.
  – The effect is that the process completes execution more quickly

Why use threads?

• Many applications are composed of multiple distinct activities. Threads simplify the programming model.
  – Word processors, servers

• Threads are
  – more efficient to create and destroy
  – especially efficient in processes that have a good mix of CPU and I/O activities
  – able to exploit the processing power of multiple CPUs

A multithreaded word processor

Three threads cooperate with each other:
• one communicates with the user
• one reformats the document
• one handles autosave

A multithreaded web server

A client request for service arrives; a dispatcher thread accepts it and hands it off to a worker thread. A rough outline of the code (one dispatcher thread plus worker threads) is sketched below.
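The following is only a minimal sketch of that outline, assuming POSIX Pthreads; get_next_request() and handle_request() are placeholder stubs standing in for the real network and disk I/O, and a production server would also add error handling.

/* Minimal sketch of the dispatcher/worker outline, assuming POSIX Pthreads.
 * get_next_request() and handle_request() are placeholder stubs. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  16

typedef struct { int request_id; } request_t;      /* placeholder request */

static request_t queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

/* Stub: pretend to wait for the next client request on the network. */
static request_t get_next_request(void)
{
    static int next_id = 0;
    sleep(1);
    request_t req = { next_id++ };
    return req;
}

/* Stub: pretend to read the requested page from disk and send the reply. */
static void handle_request(request_t req)
{
    printf("serving request %d\n", req.request_id);
}

/* Dispatcher thread: accept each incoming request and queue it for a worker. */
static void *dispatcher_thread(void *arg)
{
    for (;;) {
        request_t req = get_next_request();        /* may block on network I/O */
        pthread_mutex_lock(&lock);
        while (count == QUEUE_SIZE)
            pthread_cond_wait(&not_full, &lock);   /* queue full: wait */
        queue[tail] = req;
        tail = (tail + 1) % QUEUE_SIZE;
        count++;
        pthread_cond_signal(&not_empty);           /* wake one sleeping worker */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Worker thread: take a request off the queue and service it.  Blocking on
 * disk I/O here does not stop the dispatcher or the other workers. */
static void *worker_thread(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);  /* sleep until work arrives */
        request_t req = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        handle_request(req);                       /* may block on disk I/O */
    }
    return NULL;
}

int main(void)
{
    pthread_t dispatcher, workers[NUM_WORKERS];
    pthread_create(&dispatcher, NULL, dispatcher_thread, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker_thread, NULL);
    pthread_join(dispatcher, NULL);                /* in practice, runs forever */
    return 0;
}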

What threads offer ~ web server example ~

• A single-threaded web server would result in much idle time for the CPU
  – No other request could be served while the one thread is blocked waiting for I/O

• Multiple threads make it possible to achieve parallelism
  – This improves performance

The Thread Model

Three unrelated processes, each with one initial thread

Three related threads within one process

Contrast processes with threads

[Diagram: one process containing text, data, stack, process status, and kernel/file resources, contrasted with several threads inside a process, each with its own stack, PC, and thread status but sharing the text, data, and resources.]

• Process: a grouping of related resources
• Thread: an executable entity (each with its own stack and PC)

Contrast processes with threads

• Some items are shared by all threads within a process (the address space, global data, open files).
• Other items are private to each thread (program counter, registers, stack, thread status).
• Each thread has its own stack.
  – Why does each thread need its own stack? Each thread calls and returns from its own sequence of procedures, so it needs its own history of stack frames and local variables (see the sketch below).
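As a small illustration (an assumed Pthreads example, not code from the slides), two threads below call the same recursive function; each gets its own chain of stack frames and its own copies of the local variables.

/* Minimal sketch (assumed example): each thread has its own stack frames. */
#include <pthread.h>
#include <stdio.h>

static int depth_sum(int depth)
{
    int local = depth;                    /* separate copy on each thread's stack */
    if (depth == 0)
        return 0;
    return local + depth_sum(depth - 1);  /* each call pushes a new frame */
}

static void *runner(void *arg)
{
    int start = *(int *)arg;
    printf("thread starting at depth %d computed %d\n", start, depth_sum(start));
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    int deep = 10, shallow = 3;           /* the two threads have different call histories */
    pthread_create(&a, NULL, runner, &deep);
    pthread_create(&b, NULL, runner, &shallow);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}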

Procedures for manipulating threads

• thread_create
  – Issued by a thread wishing to create another thread
• thread_exit
  – Issued by a thread that is done executing
• thread_wait
  – Issued by a thread waiting for another thread to exit
• thread_yield
  – Issued by a thread voluntarily surrendering the CPU to another thread (no time-sharing within a process; see the Pthreads sketch below)
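These names follow the generic package in the text. As a rough mapping (an assumption, not part of the slides), the POSIX Pthreads calls pthread_create, pthread_exit, pthread_join, and sched_yield play the corresponding roles:

/* Minimal sketch: generic thread calls expressed with POSIX Pthreads.
 * Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    sched_yield();               /* thread_yield: voluntarily give up the CPU */
    printf("thread %d done\n", id);
    pthread_exit(NULL);          /* thread_exit: this thread is finished */
}

int main(void)
{
    pthread_t tid[2];
    int ids[2] = { 1, 2 };

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);   /* thread_create */

    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);   /* thread_wait: wait for that thread to exit */

    printf("initial thread: all workers have exited\n");
    return 0;
}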

Where are threads managed?

• Either in user space or in the kernel

• There are advantages and disadvantages to either approach.

• Originally, no operating systems supported threads, so user-space libraries were developed to provide threads packages.
  – Today, both Windows and Linux offer kernel support for threads.

Implementing Threads in User Space

• Each process maintains its own thread table.

• If Thread A is running but must wait for Thread B to complete some work, we say that Thread A is “locally blocked”.
  – Thread A puts itself into the blocked state by saving its register contents in the process’ thread table, searching the table for a ready thread to run, and reloading the CPU registers with the new thread’s saved register contents. The new thread then begins executing.
  – These are just a few quick instructions, so the switch is very efficient (see the sketch below).

• If a thread is done running for the time being, it calls thread_yield.
  – The code of thread_yield saves the thread’s information in the thread table and then calls the thread scheduler to pick another thread to run.
  – Saving the thread’s state and scheduling another thread are accomplished by local procedures, and do not require a time-consuming system call.
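As a rough illustration of such a user-level switch (an assumed sketch using the Linux ucontext API, not code from the slides), the program below passes control back and forth between the initial thread and one user-level thread with ordinary procedure calls; a real threads package would keep a thread table and a scheduler instead of two hard-wired contexts.

/* Minimal sketch of a user-level "yield" using the ucontext API (Linux).
 * The switch is a local operation, not a scheduling decision by the kernel. */
#include <ucontext.h>
#include <stdio.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[STACK_SIZE];        /* each thread needs its own stack */

static void thread_func(void)
{
    printf("user-level thread: running\n");
    swapcontext(&thread_ctx, &main_ctx);     /* "yield" back to the initial thread */
    printf("user-level thread: resumed, now exiting\n");
}

int main(void)
{
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp   = thread_stack;
    thread_ctx.uc_stack.ss_size = STACK_SIZE;
    thread_ctx.uc_link          = &main_ctx; /* where to go when the thread ends */
    makecontext(&thread_ctx, thread_func, 0);

    printf("main: switching to user-level thread\n");
    swapcontext(&main_ctx, &thread_ctx);     /* save main, run the thread */
    printf("main: thread yielded, switching back\n");
    swapcontext(&main_ctx, &thread_ctx);     /* resume the thread */
    printf("main: thread finished\n");
    return 0;
}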

More advantages of user-space thread management

• Each process can implement a customized thread scheduling algorithm because it knows the tasks its threads are performing.

• Even if many processes generate many threads, each process maintains its own thread table, so a large thread table in the kernel is not a potential problem.

One big problem

• How do we handle blocking system calls?
  – There is no convenient way to ensure that, once a thread blocks, other threads within the same process will still be able to execute.
  – One solution uses the select system call (sketched below).
    • select reports which file descriptors are ready, and so tells the thread whether or not a read call would block.
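A minimal sketch of that workaround follows; schedule_another_thread() is a hypothetical hook into a user-level scheduler (the empty stub only keeps the example self-contained). Before issuing a read that might block the whole process, the wrapper polls the descriptor with a zero timeout and lets another thread run if it is not yet readable.

/* Minimal sketch of the select-based workaround.  schedule_another_thread()
 * is a hypothetical entry point into a user-level thread scheduler. */
#include <sys/select.h>
#include <unistd.h>

static void schedule_another_thread(void)
{
    /* A real user-space threads package would switch to a ready thread here. */
}

/* Wrapper a threads package could put around read(): only call read when the
 * kernel says it will not block; otherwise give the CPU to another thread. */
ssize_t thread_safe_read(int fd, void *buf, size_t n)
{
    for (;;) {
        fd_set readable;
        struct timeval no_wait = { 0, 0 };   /* zero timeout: just poll */

        FD_ZERO(&readable);
        FD_SET(fd, &readable);

        if (select(fd + 1, &readable, NULL, NULL, &no_wait) > 0)
            return read(fd, buf, n);         /* descriptor ready: read won't block */

        schedule_another_thread();           /* let a different thread run */
    }
}

int main(void)
{
    char buf[128];
    ssize_t got = thread_safe_read(0, buf, sizeof buf);  /* fd 0 = stdin */
    if (got > 0)
        write(1, buf, (size_t)got);          /* echo what was read */
    return 0;
}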

Implementing Threads in the Kernel

• The kernel maintains one thread table.
  – When a thread wants to create, destroy, or block another thread, it must make a system call.
    • More time-consuming than in user space

• When a thread is blocked, the kernel chooses the next thread to run, either from the current process or from another process.

• But the kernel also engages in thread recycling to improve performance.

Attempts to combine both approaches

• Hybrid implementation

• Scheduler activations

Pop-up threads

Useful in server processes. The arrival of a message (request) causes a new thread to “pop up” and handle it, instead of waking up a blocked thread. Less overhead is involved.