< DNS >
DNS == Domain Name System

Why do we need domain names?
– Domain names are easier to remember than IP addresses.
– IP addresses can change dynamically.
– IP addresses may not be unique.

Why do we need DNS?
– To map domain names to IP addresses.
< DNS >
By maintaining a distributed host table
– Scalability!

How are changes to the [domain name – IP address] mapping propagated?
– Caching
– TTL (time to live)
< DNS >
Name resolution commands
– nslookup [ipaddress | sitename]
– ping -a

The query scheme is simple
– Query(domain name, RR type)
– Answer(values, additional RRs)
– RR == Resource Record
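As a sketch, Python's standard resolver issues the same kind of name-to-address query; "localhost" is used here so the demo needs no external DNS:

```python
import socket

# A-record style lookup via the stdlib resolver.
# gethostbyname returns an IPv4 address string.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```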
DNS tree structure

.                                       (root)
├── edu.
│   ├── cornell.edu.
│   │   ├── cs.cornell.edu.
│   │   └── eng.cornell.edu.
│   ├── cmu.edu.
│   └── mit.edu.
├── com.
├── jp.
└── us.

Records in cs.cornell.edu.:
foo.cs.cornell.edu  A  10.1.1.1
bar.cs.cornell.edu  A  10.1.1.1

NS RRs act as "pointers", delegating each zone to its children.
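The tree above can be mimicked with a toy nested dictionary; the names and addresses come from the figure, and the walk from the root mirrors how resolution descends one zone cut at a time (a sketch, not real NS delegation):

```python
# Toy zone tree mirroring the figure above.
ZONES = {
    "edu.": {
        "cornell.edu.": {
            "cs.cornell.edu.": {
                "foo.cs.cornell.edu.": "10.1.1.1",
                "bar.cs.cornell.edu.": "10.1.1.1",
            }
        }
    }
}

def resolve(name, tree=ZONES):
    # Descend from the root, one zone cut at a time.
    for zone, sub in tree.items():
        if name == zone and isinstance(sub, str):
            return sub                    # reached an A record
        if name.endswith(zone) and isinstance(sub, dict):
            return resolve(name, sub)     # follow the delegation
    return None

print(resolve("foo.cs.cornell.edu."))  # 10.1.1.1
```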
< CDN >
CDN == Content Distribution Network

Replication of web servers

CDN vs. centralized server
– Less latency, better performance
– More robust service availability
[Figure: Content Distribution Network, with replicated sites (S) placed at ISPs, backbone ISPs, Internet exchanges (IX), and hosting centers.]
< CDN >
Cached CDN – caches contents on a cache miss
Pushed CDN – pushes contents up-front

Issues
– Difficulty with dynamic contents
– Cache performance vs. content synchronization
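A cached CDN's fetch-on-miss behavior can be sketched in a few lines (the origin content and path here are hypothetical):

```python
# Edge server stores a copy the first time content is requested;
# later requests for the same path are served from the local cache.
origin = {"/index.html": "<html>home</html>"}   # hypothetical origin
cache = {}

def serve(path):
    if path not in cache:            # cache miss: fetch from origin
        cache[path] = origin[path]
    return cache[path]               # cache hit thereafter

serve("/index.html")
serve("/index.html")
print(len(cache))  # 1: the origin was contacted only once
```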
What if lots of clients try to access the same content server (CS)?
[Figure: The same network with content servers (CS) deployed near the clients (C) at the edge ISPs, while the origin server (OS) sits behind the hosting centers; many clients hitting one CS can overload it.]
DNS & CDN together…
DNS load balancer
– Picks a server that is least overloaded and closest to the client.

DNS answers with a small TTL
– 30 seconds to one minute, for fine-grained load decisions
– Quickly offloads a busy or even crashed content server
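The balancer's choice can be sketched as a filter-then-minimize over a server table (names, loads, and distances below are made up for illustration):

```python
# Hypothetical table of (name, load, distance-to-client).
servers = [("s1", 0.9, 10), ("s2", 0.2, 10), ("s3", 0.1, 80)]

def pick(servers, max_distance=50):
    # Prefer nearby servers, then the least loaded among them.
    nearby = [s for s in servers if s[2] <= max_distance] or servers
    return min(nearby, key=lambda s: s[1])[0]

print(pick(servers))  # s2: s3 is less loaded, but too far away
```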
< UDP >
Unreliable, out-of-order message delivery
Connection-less
Datagram based
– Messages larger than the MTU may be fragmented or dropped.
– MTU == Maximum Transmission Unit
– Default ~1460 bytes with Cisco routers

No flow control
No congestion control
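The datagram model can be demonstrated over loopback with Python's socket module; each sendto() is one self-contained message, with no connection and no delivery guarantee:

```python
import socket

# Receiver binds to a loopback port; port 0 lets the OS pick one.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5)                      # safety: don't block forever

# Sender needs no connection; it just addresses each datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())

data, addr = rx.recvfrom(2048)
print(data)  # b'hello'
tx.close(); rx.close()
```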
< TCP >
Reliable, in-order message delivery
Connection-oriented
Stream based
– Thus no restriction on transmission size

Flow control
Congestion control
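The stream model, by contrast, delivers one ordered byte stream regardless of how the sender split its writes; a minimal loopback sketch (connect() triggers the three-way handshake under the hood):

```python
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    conn.sendall(b"hel")
    conn.sendall(b"lo")              # two writes, one ordered stream
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())       # three-way handshake happens here
data = b""
while chunk := cli.recv(16):         # read until EOF
    data += chunk
print(data)  # b'hello'
cli.close(); srv.close(); t.join()
```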
TCP connection establishment
Three-way handshake
– 1. SYN
– 2. SYN + ACK
– 3. ACK

The connection is established only after all three steps; otherwise it times out.
Client (active)                       Server (passive)
    SYN, SeqNum=x                 →
                                  ←   SYN+ACK, SeqNum=y, Ack=x+1
    ACK, Ack=y+1                  →
TCP-SYN Attack
Classic DoS (Denial of Service) attack.
Attack by creating myriads of half-established connections.
TCP Sliding Window
This is how the TCP properties below come to life
– Reliable delivery
– In-order delivery
– Any size message (stream based)
– Flow control

The sliding window cannot slide if messages in the window did not get through.
TCP Sliding Window
Window size is advertised via ACKs

Small sliding window
– Low performance due to delay waiting on ACKs
– Bad on networks with a large RTT (Round Trip Time)

Large sliding window
– Sends data in bulk, waits for ACKs in bulk
– Bad under network congestion, as bulk transfer makes the situation worse
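A toy simulation of a cumulative-ACK sliding window shows how one lost segment stalls the window until it is retransmitted (the loss pattern is made up):

```python
# At most `window` unacked segments may be outstanding; a lost
# segment stops the window from sliding until the retransmission.
def transfer(n, window, drop_first={2}):
    base, sends = 0, 0
    lost = set(drop_first)           # segments dropped on first attempt
    while base < n:
        for seq in range(base, min(base + window, n)):
            sends += 1               # (re)transmit this segment
            if seq in lost:
                lost.discard(seq)    # it will succeed on the retry
                break                # cumulative ACK stops at the hole
            base = seq + 1           # in-order ACK: window slides
    return sends

print(transfer(6, window=3))  # 7: the one loss costs one extra send
```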
TCP Congestion Control
Interprets dropped packets as congestion
Maintains a congestion window size
Additive Increase / Multiplicative Decrease (AIMD)
[Figure: TCP sawtooth pattern, congestion window (KB) vs. time (seconds).]
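The sawtooth can be reproduced with a toy AIMD loop, one window update per round and a loss injected every 8th round (both parameters are arbitrary):

```python
# cwnd grows by 1 segment per RTT (additive increase) and halves
# on a loss (multiplicative decrease): the sawtooth pattern.
def aimd(rounds, loss_every=8, cwnd=1):
    trace = []
    for t in range(1, rounds + 1):
        if t % loss_every == 0:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase
        trace.append(cwnd)
    return trace

print(aimd(10))  # [2, 3, 4, 5, 6, 7, 8, 4, 5, 6]
```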
Wireless environment
Issues
– High RTT (Round Trip Time)
– Message loss pattern differs from wired networks

What do "dropped packets" indicate?
– TCP assumes congestion.
– But it could be just a lossy medium.
How will UDP/TCP behave on wireless?
VPN == Virtual Private Network
– A remote client can communicate with the company network securely over the public network, as if it resided on the internal LAN.

NAT == Network Address Translation
– Allows an IP-based network to manage its public (Internet) addresses separately from its private (intranet) addresses.
– A popular technology on DSL or cable LANs.
Network Failure
Packet drop or packet delay
System crash / halt

Byzantine failure
– Some systems behave incorrectly or unexpectedly
– Could be a malicious attacker

Network partition
– Also known as "split brain syndrome"
– Some nodes in a cluster can no longer communicate with each other
IP Multicast
Reduces overhead for the sender
Reduces bandwidth consumption in the network

Useful in a small subnet
– e.g., a virtual meeting broadcast within a corporate network

Multicast over the Internet?
– MBone (buried in history…)
< Virtual Memory Overview >
Address Translation: Hardware converts virtual addresses to physical addresses via an OS-managed lookup table (page table)
[Figure: The CPU issues virtual addresses (0 … N-1); the page table maps them to physical addresses (0 … P-1) in memory, or to disk.]
Virtual Memory, yet another picture…

[Figure: A memory-resident page table of (valid bit, physical page or disk address) entries; valid entries (1) point into physical memory, invalid entries (0) point to disk storage (a swap file or a regular file-system file).]
Multi-Level Page Tables
Multi-level page tables
– Level 1 table: 1024 entries, each of which points to a Level 2 page table.
– Level 2 table: 1024 entries, each of which points to a page.
[Figure: One Level 1 table whose entries point to many Level 2 tables.]
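With 1024-entry tables and 4 KB pages, a 32-bit virtual address splits into 10 + 10 + 12 bits; a small sketch of the index computation (the example address is arbitrary):

```python
# Two-level index computation for the layout above.
def split_va(va):
    l1 = (va >> 22) & 0x3FF          # level-1 index (top 10 bits)
    l2 = (va >> 12) & 0x3FF          # level-2 index (next 10 bits)
    off = va & 0xFFF                 # byte offset within the 4 KB page
    return l1, l2, off

print(split_va(0x00403004))  # (1, 3, 4)
```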
Page Faults
PTE == Page Table Entry
– Each entry is (pointer to physical address, flags)

If a process tries to access a page not in memory:
page fault → interrupt → OS exception handler ("page-fault trap")
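The lookup-then-trap behavior can be sketched with a toy page table (entries and sizes are hypothetical):

```python
# Each PTE is modeled as (frame, valid). Accessing an invalid entry
# raises the software analogue of a page-fault trap.
class PageFault(Exception):
    pass

page_table = {0: (7, True), 1: (None, False)}   # page 1 is on disk

def translate(page, offset, page_size=4096):
    frame, valid = page_table[page]
    if not valid:
        raise PageFault(page)        # the OS handler would swap it in
    return frame * page_size + offset

print(translate(0, 5))  # 28677 (frame 7 * 4096 + 5)
try:
    translate(1, 0)
except PageFault:
    print("page fault on page 1")
```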
Paging and swapping
[Figure: Before the fault, the page table maps the virtual address to disk; after the fault, the OS has swapped the page in and the mapping points to a physical address in memory.]
< Page replacement schemes >
FIFO – first in, first out
OPT (or MIN) – optimal page replacement
LRU – least recently used

LRU approximation
– Mimics LRU when there is no hardware support for LRU
– Reference bits
– Additional-reference-bits algorithm
– Second-chance algorithm

LFU – least frequently used
MFU – most frequently used
FIFO and Belady's Anomaly
For some page replacement algorithms, the page fault rate may increase as the number of allocated frames increases.
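Belady's anomaly is easy to reproduce with FIFO on the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5: four frames suffer more faults than three:

```python
from collections import deque

# FIFO replacement: evict the page that has been resident longest.
def fifo_faults(refs, frames):
    q, faults = deque(), 0
    for p in refs:
        if p not in q:
            faults += 1
            if len(q) == frames:
                q.popleft()          # evict the oldest page
            q.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10
```

More frames, more faults: the anomaly.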
OPT (or MIN)
Assumes knowledge of future requirements
– Replace the page that will not be used for the longest period of time

Does not exhibit Belady's anomaly, but is practically too difficult to implement!
LRU
Assumes pages used recently will be used again
– Throw away the page not used for the longest time

A popular policy in practice
Does not exhibit Belady's anomaly

Implementation options
– Counters
– Stack
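The "stack" option can be sketched with an OrderedDict, which keeps pages in recency order; on the same reference string as above, LRU shows no anomaly:

```python
from collections import OrderedDict

# LRU via a recency "stack": least recently used page is at the front.
def lru_faults(refs, frames):
    stack, faults = OrderedDict(), 0
    for p in refs:
        if p in stack:
            stack.move_to_end(p)            # touched: move to the top
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)   # evict least recently used
            stack[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3), lru_faults(refs, 4))  # 10 8
```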
Second-chance
LRU approximation: reference bits + FIFO
– If the bit is set, the page is granted a second chance.
– If a page is used often enough, it will never be replaced.

Implemented with a circular queue ("clock")
Bad if all bits are set: degenerates to FIFO.
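A sketch of the eviction scan, with the circular queue modeled as a list plus a clock hand (pages and bits below are hypothetical):

```python
# Second chance ("clock"): a set reference bit buys one more pass;
# the first page found with a clear bit is the victim.
def clock_evict(pages, ref_bits, hand=0):
    while ref_bits[hand]:
        ref_bits[hand] = 0               # clear: give a second chance
        hand = (hand + 1) % len(pages)   # advance the clock hand
    return pages[hand]                   # first unreferenced page

print(clock_evict(["A", "B", "C"], [1, 1, 0]))  # C
print(clock_evict(["A", "B", "C"], [1, 1, 1]))  # A (all set: FIFO)
```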
LFU
Assumes pages used actively will be used again.
What about a page used heavily only in the beginning?
– Shift the count by 1 at regular intervals.
Virtual Memory Programmer’s view
Large "flat" address space
– Can allocate large blocks of contiguous addresses

Each process "owns" the machine
– Has a private address space
– Unaffected by the behavior of other processes
Virtual Memory System’s view
A virtual address space created by page mapping
– Address space need not be contiguous
– Allocated dynamically
– Protection enforced during address translation

Multi-processing performance
– Switch to other processes while servicing disk I/O for a page fault
Levels in Memory Hierarchy
[Figure: CPU with registers, then cache, memory, and disk.]

                Register      Cache          Memory       Disk
size:           32 B          32 KB-4 MB     128 MB       20 GB
speed:          1 ns          2 ns           50 ns        8 ms
$/Mbyte:                      $100/MB        $1.00/MB     $0.006/MB
line size:      8 B           32 B           4 KB

larger, slower, cheaper →
(In the hierarchy, the cache sits between registers and memory; virtual memory sits between memory and disk.)

[Figure: CPU → address translation → cache → main memory; the VA is translated to a PA before the cache lookup; a hit returns data, a miss goes to main memory.]
Virtual Memory + Cache
Problem?
– Address translation is performed before each cache lookup
– Which may itself involve a memory access (for the PTE)
– We could cache page table entries…
Virtual Memory + Cache + TLB
[Figure: CPU → TLB lookup; on a TLB hit the PA goes straight to the cache, on a TLB miss the full translation runs first; a cache hit returns data, a cache miss goes to main memory.]
The TLB speeds up address translation.
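The TLB's effect can be sketched as a small dictionary caching recent page-to-frame translations, so the full page-table walk runs only on a miss (the mappings are hypothetical):

```python
page_table = {0: 7, 1: 3, 2: 9}      # hypothetical page → frame map
tlb = {}
stats = {"hit": 0, "miss": 0}

def tlb_translate(page):
    if page in tlb:
        stats["hit"] += 1            # fast path: no table walk
    else:
        stats["miss"] += 1           # walk the page table, fill TLB
        tlb[page] = page_table[page]
    return tlb[page]

for p in [0, 1, 0, 0, 2, 1]:
    tlb_translate(p)
print(stats)  # {'hit': 3, 'miss': 3}
```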