System Structuring with Threads
Example: A Transcoding Web Proxy Appliance

• Proxy: interposed between Web (HTTP) clients and servers. It masquerades as (represents) the server to the client, and as the client to the server.
• Cache: stores fetched objects (Web pages) on local disk, reducing network overhead if objects are fetched again.
• Transcoding: "distills" images to a size/resolution that's right for the client, and encrypts/decrypts as needed for security on the Internet.
• Appliance: serves one purpose only; no general-purpose OS.

[Figure: clients — proxy — servers]
Using Threads to Structure the Proxy Server

[Figure: proxy thread structure: network driver, HTTP request handler, disk driver, scrubber, stats, object cache manager, distill, encrypt, logging]

• long-term periodic threads: gather statistics; "scrub" the cache for expired (old) objects
• worker threads for specific objects: the distiller compresses/shrinks images; encrypt/decrypt
• device controller threads: a logging thread, one thread for each disk, one thread for the network interface
• server threads: request handlers
Thread Family Tree for the Proxy Server

[Figure: thread family tree: network driver, HTTP request handler, disk driver, scrubber, stats, file/cache manager, distill, encrypt, logging]

• main thread: waiting for child termination
• periodic threads: waiting for the timer to fire
• server threads: waiting on queues of data messages or pending requests (e.g., device interrupts)
• worker threads: waiting for data to be produced/consumed
Periodic Threads and Timers
The scrubber and stats-gathering threads must wake up periodically to do their work.
These “background” threads are often called daemons or sleepers.
AlarmClock::Pause(int howlong);  /* called by waiting threads */
    Puts the calling thread to sleep; the alarm clock maintains a collection of threads waiting for time to pass.
AlarmClock::Tick();  /* called by the clock interrupt handler */
    Wakes up any waiting threads whose wait times have elapsed.

The scrubber and stats threads each loop:

while (systemActive) {
    do my work;
    alarm->Pause(10000);
}
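The Pause/Tick interface above is Nachos-style pseudocode; the following is an illustrative Python sketch (all names are ours), assuming howlong is counted in clock ticks. A lock protects a list of (wakeup time, event) pairs; Tick() advances a virtual clock and wakes any sleepers whose time has elapsed.

```python
import threading
import time

class AlarmClock:
    """Puts callers to sleep until enough clock ticks have elapsed."""
    def __init__(self):
        self.lock = threading.Lock()
        self.now = 0                # virtual time, advanced by tick()
        self.waiters = []           # list of (wakeup_time, Event)

    def pause(self, howlong):
        """Called by waiting threads: sleep for 'howlong' ticks."""
        event = threading.Event()
        with self.lock:
            self.waiters.append((self.now + howlong, event))
        event.wait()                # block until tick() sets the event

    def tick(self):
        """Called by the (simulated) clock interrupt handler."""
        with self.lock:
            self.now += 1
            still_waiting = []
            for wakeup, event in self.waiters:
                if wakeup <= self.now:
                    event.set()     # wake the sleeping thread
                else:
                    still_waiting.append((wakeup, event))
            self.waiters = still_waiting

alarm = AlarmClock()
woke = []

def sleeper():
    alarm.pause(3)                  # sleep for 3 ticks
    woke.append(alarm.now)

t = threading.Thread(target=sleeper)
t.start()
while not alarm.waiters:            # wait until the sleeper has registered
    time.sleep(0.001)
for _ in range(5):                  # simulate 5 clock interrupts
    alarm.tick()
t.join()
print(woke)                         # the sleeper woke at or after tick 3
```

A real kernel would drive tick() from the hardware timer interrupt rather than a loop, but the waiting-list structure is the same.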
Interfacing with the Network

[Figure: the TCP/IP protocol stack exchanges packets with the NIC device driver through NetRcv and NetTx queues; the Network Interface Card sits on the I/O Bus, transferring packets between a host memory buffer pool and the network link for sending and receiving.]
Network Reception

[Figure: the NIC interrupt fires the receive interrupt handler, which does packetArrival->V(); the TCP/IP reception thread does packetArrival->P() and passes data up to the HTTP request handler.]

The TCP/IP reception thread loops:

while (systemActive) {
    packetArrival->P();
    disable interrupts;
    pkt = GetRcvPacket();
    enable interrupts;
    HandleRcvPacket(pkt);
}
This example illustrates use of a semaphore by an interrupt handler to pass incoming data to waiting threads.
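A sketch of this handoff using Python's threading module (the names and the simulated "interrupt handler" are illustrative): the producer plays the role of the interrupt handler calling V (release), and the reception thread calls P (acquire) before dequeuing; an ordinary lock stands in for disabling interrupts around the shared queue.

```python
import threading
from collections import deque

packet_arrival = threading.Semaphore(0)   # counts packets not yet consumed
rcv_queue = deque()                        # filled by the "interrupt handler"
queue_lock = threading.Lock()              # stands in for disabling interrupts
handled = []

def receive_interrupt(pkt):
    """Simulated interrupt handler: enqueue the packet, then V()."""
    with queue_lock:
        rcv_queue.append(pkt)
    packet_arrival.release()               # V(): wake a waiting thread

def tcpip_reception(npackets):
    """TCP/IP reception thread: P(), dequeue, handle."""
    for _ in range(npackets):
        packet_arrival.acquire()           # P(): wait for a packet
        with queue_lock:                   # "disable interrupts"
            pkt = rcv_queue.popleft()
        handled.append(pkt)                # HandleRcvPacket(pkt)

t = threading.Thread(target=tcpip_reception, args=(3,))
t.start()
for pkt in ("pkt0", "pkt1", "pkt2"):
    receive_interrupt(pkt)
t.join()
print(handled)   # packets handed off in arrival order
```

Note that the handler never blocks: V() on a semaphore is safe to call from interrupt context, which is exactly why a semaphore (and not a condition variable requiring a lock) is used here.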
Inter-Thread Messaging with Send/Receive

The HTTP request handler (receiving from the network) runs:

while (systemActive) {
    object = GetNextClientRequest();
    find object in cache or Web server;
    while (more data in object) {
        currentThread->receive(data);
        transmit data to client;
    }
}

The file/cache manager (sending toward the network) runs:

get request for object from thread;
while (more data in object) {
    read data from object;
    thread->send(data);
}
This example illustrates use of blocking send/receive primitives to pass a stream of messages or commands to a specific thread, connection, or “port”.
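The thread->send(data) / currentThread->receive() primitives can be approximated by giving each thread a private blocking queue (a "mailbox"). This Python sketch is illustrative, not the Nachos API; the None end-of-object marker is our convention.

```python
import threading
import queue

class MThread(threading.Thread):
    """A thread with a private mailbox supporting blocking send/receive."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mailbox = queue.Queue(maxsize=1)   # rendezvous-style buffer

    def send(self, data):
        self.mailbox.put(data)      # blocks if the receiver falls behind

    def receive(self):
        return self.mailbox.get()   # blocks until a message arrives

received = []

def http_handler_body():
    # while more data in object: receive(data); transmit it to the client
    while True:
        data = handler.receive()
        if data is None:            # end-of-object marker (our convention)
            break
        received.append(data)       # "transmit data to client"

handler = MThread(target=http_handler_body)
handler.start()

# file/cache manager side: read data from object; thread->send(data)
for chunk in ["<html>", "body", "</html>"]:
    handler.send(chunk)
handler.send(None)
handler.join()
print(received)   # ['<html>', 'body', '</html>']
```

The maxsize=1 mailbox makes send block until the receiver is ready, giving the flow control that the slide's blocking primitives provide.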
Request/Response with Send/Receive

The HTTP request handler sends a request and blocks for the response:

Thread* cache;
...
cache->send(request);
response = currentThread->receive();
...

The file/cache manager services requests:

while (systemActive) {
    currentThread->receive(request);
    ...
    requester->send(response);
}
The Need for Multiple Service Threads

[Figure: requests flow from the network through the HTTP request handler to the file/cache manager.]

Each new request will involve a stream of messages passing through dedicated server thread(s) in each service module. But what about new requests flowing into the system? A system with single-threaded service modules could handle only one request at a time, even if most of its time is spent waiting for slow devices.

Solution: multi-threaded service modules.
Using Ports for Multithreaded Servers

The HTTP request handler sends to a port rather than to a specific thread:

Port* cachePort;
...
cachePort->send(request);
response = currentThread->receive();
...

Any of several file/cache manager threads may receive from the port:

while (systemActive) {
    cachePort->receive(request);
    ...
    requester->send(response);
}
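A port is essentially a message queue shared by several identical service threads; which thread receives a given request is up to the scheduler. A hedged Python sketch (the reply-queue convention for requester->send(response) is ours):

```python
import threading
import queue

cache_port = queue.Queue()   # shared request port for all manager threads

def cache_manager(worker_id):
    """One of several identical file/cache manager threads."""
    while True:
        request = cache_port.get()      # cachePort->receive(request)
        if request is None:             # shutdown marker (our convention)
            break
        obj, reply_q = request
        # requester->send(response): reply on the requester's queue
        reply_q.put((worker_id, "data for " + obj))

NWORKERS = 3
workers = [threading.Thread(target=cache_manager, args=(i,))
           for i in range(NWORKERS)]
for w in workers:
    w.start()

# an HTTP request handler: send a request to the port, wait for the reply
reply_q = queue.Queue()
responses = []
for obj in ["a.html", "b.gif", "c.jpg"]:
    cache_port.put((obj, reply_q))      # cachePort->send(request)
    responses.append(reply_q.get())     # response = receive()

for _ in workers:
    cache_port.put(None)                # shut the workers down
for w in workers:
    w.join()
print([r[1] for r in responses])        # replies arrive in request order
```

Because any of the three workers may pick up each request, the system can service several requests concurrently even when some workers are blocked on slow devices.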
Producer/Consumer Pipes

[Figure: data flows from the network through a pipe to the file/cache manager.]

char inbuffer[1024];
char outbuffer[1024];

while (inbytes != 0) {
    inbytes = input->read(inbuffer, 1024);
    outbytes = process data from inbuffer to outbuffer;
    output->write(outbuffer, outbytes);
}
This example illustrates one important use of the producer/consumer bounded buffer in Lab #3.
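A bounded buffer like the one in Lab #3 can be sketched with one lock and two condition variables; this Python version is illustrative (the lab itself uses Nachos locks and conditions).

```python
import threading

class BoundedBuffer:
    """Producer/consumer bounded buffer: one lock, two conditions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def write(self, item):
        with self.lock:
            while len(self.items) == self.capacity:
                self.not_full.wait()        # producer blocks when full
            self.items.append(item)
            self.not_empty.notify()         # wake one waiting consumer

    def read(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()       # consumer blocks when empty
            item = self.items.pop(0)
            self.not_full.notify()          # wake one waiting producer
            return item

pipe = BoundedBuffer(capacity=4)
out = []

def consumer(n):
    for _ in range(n):
        out.append(pipe.read())

t = threading.Thread(target=consumer, args=(100,))
t.start()
for i in range(100):       # far more items than capacity: writes must block
    pipe.write(i)
t.join()
print(out == list(range(100)))   # True: FIFO order preserved
```

The while loops around wait() (rather than if) guard against spurious wakeups and re-check the condition after reacquiring the lock, the standard monitor discipline.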
Forking and Joining Workers

The HTTP handler forks a distiller and a decrypter, connected by a pipe:

distiller = new Thread();
distiller->Fork(Distill());
decrypter = new Thread();
decrypter->Fork(Decrypt());
pipe = new Pipe();

/* give workers their input */
distiller->Send(input);
decrypter->Send(pipe);

/* give workers their output */
distiller->Send(pipe);
decrypter->Send(output);

/* wait for workers to finish */
distiller->Join();
decrypter->Join();

[Figure: input → distiller → pipe → decrypter → output]
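The fork/join pattern with a pipe between the two workers can be sketched in Python as follows (Distill and Decrypt here are stand-ins that merely transform strings, and the None end-of-stream marker is our convention):

```python
import threading
import queue

def distill(inp, outp):
    """Worker 1: 'compress' each input item, push it into the pipe."""
    for item in inp:
        outp.put(item.upper())      # stand-in for image distillation
    outp.put(None)                  # end-of-stream marker

def decrypt(inp, outp):
    """Worker 2: 'decrypt' each item from the pipe into the output."""
    while True:
        item = inp.get()
        if item is None:
            break
        outp.append(item + "!")     # stand-in for decryption

input_data = ["a", "b", "c"]
pipe = queue.Queue()                # pipe = new Pipe()
output = []

distiller = threading.Thread(target=distill, args=(input_data, pipe))
decrypter = threading.Thread(target=decrypt, args=(pipe, output))
distiller.start()                   # distiller->Fork(...)
decrypter.start()                   # decrypter->Fork(...)
distiller.join()                    # wait for workers to finish
decrypter.join()
print(output)   # ['A!', 'B!', 'C!']
```

Both stages run concurrently, so the decrypter can start on early items while the distiller is still producing later ones.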
A Serializer for Logging

[Figure: multiple threads feed a single queue that is drained by the logging thread, which writes to the disk driver.]
Multiple threads enqueue log records on a single queue without blocking for log write completion; a single logging thread writes the records into a stream, so log records are not interleaved.
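A sketch of such a serializer in Python: many threads enqueue records without waiting for the write, and a single logging thread drains the queue, so each record reaches the stream whole (the record format and in-memory "log file" are illustrative).

```python
import threading
import queue
import io

log_queue = queue.Queue()      # enqueue never blocks for the disk write
log_stream = io.StringIO()     # stands in for the log file on disk

def logger():
    """The single logging thread: serializes all writes to the stream."""
    while True:
        record = log_queue.get()
        if record is None:                 # shutdown marker
            break
        log_stream.write(record + "\n")    # whole record written at once

def worker(wid):
    for i in range(50):
        # enqueue a record and continue immediately; no blocking on I/O
        log_queue.put("worker %d event %d" % (wid, i))

log_thread = threading.Thread(target=logger)
log_thread.start()
workers = [threading.Thread(target=worker, args=(w,)) for w in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
log_queue.put(None)
log_thread.join()

lines = log_stream.getvalue().splitlines()
print(len(lines))   # 200: every record intact, none interleaved
```

Since only one thread ever touches the stream, no record can be split or interleaved with another, which is the whole point of the serializer.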
Summary of “Paradigms” for Using Threads
• main thread or initiator
• sleepers or daemons (background threads)
• I/O service threads
listening on network or user interface
• server threads or Work Crews
waiting for requests on a message queue, work queue, or port
• filters or transformers
one stage of a pipeline processing a stream of bytes
• serializers
Threads vs. Events
Review: Thread-Structured Proxy Server

[Figure: thread family tree: network driver, HTTP request handler, disk driver, scrubber, stats, file/cache manager, distill, encrypt, logging]

• main thread: waiting for child termination
• periodic threads: waiting for the timer to fire
• server threads: waiting on queues of data messages or pending requests (e.g., device interrupts)
• worker threads: waiting for data to be produced/consumed
Thread Priority

Many systems allow assignment of priority values to threads. Each job in the ready pool has an associated priority value; the scheduler favors jobs with higher priority values.
• Assigned priorities reflect external preferences for particular users or tasks.
“All jobs are equal, but some jobs are more equal than others.”
• Example: running user interface threads (interactive) at higher priority improves the responsiveness of the system.
• Example: Unix nice system call to lower priority of a task.
• Example: Urgent tasks in a real-time process control system.
Keeping Your Priorities Straight
Priorities must be handled carefully when there are dependencies among tasks with different priorities.
• A task with priority P should never impede the progress of a task with priority Q > P.
When a lower-priority task does delay a higher-priority one, the situation is called priority inversion, and it is to be avoided.
• The basic solution is some form of priority inheritance.
When a task with priority Q waits on some resource, the holder (with priority P) temporarily inherits priority Q if Q > P.
Inheritance may also be needed when tasks coordinate with IPC.
• Inheritance is useful to meet deadlines and preserve low-jitter execution, as well as to honor priorities.
Multithreading: Pros and Cons

Multithreaded structure has many advantages...
• Express different activities cleanly as independent thread bodies, with appropriate priorities.
• Activities succeed or fail independently.
• It is easy to wait/sleep without affecting other activities: e.g., I/O operations may be blocking.
• Extends easily to multiprocessors.

...but it also has some disadvantages.
• Requires support for threads or processes.
• Requires more careful synchronization.
• Imposes context-switching overhead.
• May consume lots of space for stacks of blocked threads.
Alternative: Event-Driven Systems

while (TRUE) {
    event = GetNextEvent();
    switch (event) {
    case IncomingPacket:
        HandlePacket();
        break;
    case DiskCompletion:
        HandleDiskCompletion();
        break;
    case TimerExpired:
        RunPeriodicTasks();
        break;
    /* etc. etc. etc. */
    }
}
If handling some event requires waiting for I/O to complete, the thread arranges for another event to notify it of completion, and keeps right on going, e.g., asynchronous non-blocking I/O.
Structure the code as a single thread that responds to a series of events, each of which carries enough state to determine what is needed and “pick up where we left off”.
The thread continuously polls for new events, whenever it completes a previous event.
Question: in what order should events be delivered?
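The event loop above can be made concrete with an event queue and a dispatch table; this Python sketch (all names ours) delivers events in FIFO order, which is one possible answer to the ordering question — a real system might instead deliver higher-priority events first.

```python
from collections import deque

handled = []

def handle_packet():
    handled.append("packet")

def handle_disk_done():
    handled.append("disk")

def run_periodic_tasks():
    handled.append("timer")

# dispatch table: event type -> handler (stands in for the switch statement)
dispatch = {
    "IncomingPacket": handle_packet,
    "DiskCompletion": handle_disk_done,
    "TimerExpired":   run_periodic_tasks,
}

events = deque(["IncomingPacket", "TimerExpired", "DiskCompletion"])

while events:                  # while (TRUE) { event = GetNextEvent(); ... }
    event = events.popleft()   # FIFO delivery order
    dispatch[event]()          # run the handler to completion
print(handled)   # ['packet', 'timer', 'disk']
```

Each handler runs to completion before the next event is fetched: there is no preemption within the loop, which is exactly what spares event-driven code most locking, and exactly why a long-running handler stalls everything behind it.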
Example: Unix Select Syscall
A thread/process with multiple network connections or open files can initiate nonblocking I/O on all of them.
The Unix select system call supports such a polling model:
• files are identified by file descriptors (open file numbers)
• pass a bitmask for which descriptors to query for readiness
• returns a bitmask of descriptors ready for reading/writing
• reads and/or writes on these descriptors will not block
Select has fundamental scaling limitations in storing, passing, and traversing the bitmaps.
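A minimal illustration of the polling model using Python's select module and a pair of pipes (Python passes lists of descriptors rather than raw bitmasks, but the model is the same; a timeout of 0 makes the call a pure poll):

```python
import os
import select

r1, w1 = os.pipe()   # two independent "connections"
r2, w2 = os.pipe()

# Nothing written yet: neither read descriptor is ready.
readable, _, _ = select.select([r1, r2], [], [], 0)
print(len(readable))   # 0

os.write(w2, b"hello")   # make the second connection readable

# select reports exactly the descriptors whose reads will not block.
readable, _, _ = select.select([r1, r2], [], [], 0)
print(r2 in readable, r1 in readable)   # True False

data = os.read(r2, 1024)   # guaranteed not to block
print(data)                # b'hello'

for fd in (r1, w1, r2, w2):
    os.close(fd)
```

A single-threaded server loops over such a poll, servicing whichever descriptors are ready, instead of dedicating one blocked thread per connection.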
Event Notification with Upcalls
Problem: what if an event requires a more “immediate” notification?
• What if a high-priority event occurs while we are executing the handler for a low-priority event?
• What about exceptions relating to the handling of an event?
We need some way to preemptively “break in” to the execution of a thread and notify it of events: upcalls.
    example: NT Asynchronous Procedure Calls (APCs)
    example: Unix signals
Preemptive event handling raises synchronization issues similar to interrupt handling.
Example: Unix Signals
Signals notify processes of internal or external events.
• the Unix software equivalent of interrupts/exceptions
• only way to do something to a process “from the outside”
• Unix systems define a small set of signal types
Examples of signal generation:
• keyboard ctrl-c and ctrl-z signal the foreground process
• synchronous fault notifications, syscall errors
• asynchronous notifications from other processes via kill
• IPC events (SIGPIPE, SIGCHLD)
• alarm notifications
(signal == “upcall”)
Handling Unix Signals

1. Each signal type has a system-defined default action.
    abort and dump core (SIGSEGV, SIGBUS, etc.)
    ignore, stop, exit, continue
2. A process may choose to block (inhibit) or ignore some signal types.
useful for synchronizing with signal handlers: inhibit signals before executing code shared with the signal handler
3. The process may choose to catch some signal types by specifying a (user mode) handler procedure.
system passes interrupted context to handler
handler may munge and/or return to interrupted context
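A small Python illustration of catching a signal (Unix only; the process delivering SIGUSR1 to itself via kill stands in for an asynchronous notification from another process):

```python
import os
import signal

caught = []

def handler(signum, frame):
    """User-mode handler: the system passes the interrupted context (frame)."""
    caught.append(signum)

# catch SIGUSR1 with our handler instead of the default action
old = signal.signal(signal.SIGUSR1, handler)

os.kill(os.getpid(), signal.SIGUSR1)   # asynchronous notification via kill

signal.signal(signal.SIGUSR1, old)     # restore the previous disposition
print(caught == [signal.SIGUSR1])      # True
```

Restoring the old disposition afterward mirrors the careful discipline the slide describes: code that shares state with a handler must control exactly when that handler can run.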
Summary

1. Threads are a useful tool for structuring complex systems. Separate the code to handle concurrent activities that are logically separate, with easy handling of priority.
Interaction primitives integrate synchronization, data transfer, and possibly priority inheritance.
2. Many systems include an event handling mechanism. It is useful in conjunction with threads, or may be viewed as an alternative to threads for structuring concurrent systems.
Examples: Unix signals, NT APCs, GetNextEvent()
3. Event-structured systems may require less direct handling of concurrency.
But must synchronize with handlers if they are preemptive.