Caching, Parallel Computational Models, and Other Topics in Algorithms
Wednesday, August 13th


Announcements

- PS#6 is due tonight at midnight.
- Winners of the competition: Shir Aharon; Rasoul Kabirzadeh; Alice Yeh & Marie Feng.
- Extra office hours on Thursday & Friday:
  - Semih: Thursday 10am-12pm (Gates 424)
  - Billy: Friday 1pm-5pm (Gates B24)
  - Mike: Thursday 3pm-5pm (Gates B24)
  - Yiming: Friday 5pm-7pm (Gates B24)

Outline For Today

- Caching
- Other Algorithms & Algorithmic Techniques Beyond CS 161
  - Parallel Algorithms (Finding Min, Bellman-Ford)
  - Linear Programming
  - Other Topics & Classes

Alan Turing: Father of Computer Science

Computer Science

- Studies the powers of machines.
- The fundamental question CS asks: what is computable by machines?
- Turing, along with Church and Gödel, made computation something we can study mathematically.

Turing's Answer to What Computation Is (1936)

From "On Computable Numbers, with an Application to the Entscheidungsproblem":

"We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions."

"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book. The behavior of the computer at any moment is determined by the symbols which he is observing, and his 'state of mind' at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite."

"Let us imagine the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided. Every such operation consists of some change of the physical system consisting of the computer and his tape. We may suppose that in a simple operation not more than one symbol is altered."

"Besides these changes of symbols, the simple operations must include changes to the observed squares. I think it is reasonable to suppose that they can only be squares whose distance from the closest of the immediately previously observed squares does not exceed a certain fixed amount."

The Turing Machine

It is paradoxical that, in our quest as humans to understand what machines can do, we have been studying an abstract machine that in essence imitates a human being.

Church-Turing Thesis

The central dogma of computer science: whatever is computable is computable by the Turing machine. There is no proof of this claim.

Turing, in defense of his claim:

"All arguments which can be given are bound to be, fundamentally, appeals to intuition, and for this reason rather unsatisfactory mathematically. The arguments which I shall use are of three kinds:
(a) A direct appeal to intuition.
(b) A proof of the equivalence of two definitions (in case the new definition has a greater intuitive appeal).
(c) Giving examples of large classes of numbers which are computable."

Algorithm = Turing Machine

When we say there is an algorithm computing the shortest paths of a graph in O(m log n) time, we really mean: there is a Turing machine that computes the shortest paths of a graph in O(m log n) operations.

The Turing Machine Is a Very Powerful Machine

CS tries to understand the limits of the TM. We limit or extend the TM and ask what it can then compute:
- Limit the number of times its head may move and change state to polynomially many => poly-time algorithms
- Give the machine access to a random source => randomized algorithms
- Allow multiple heads on the tape => parallel algorithms
- Limit the length of its tape => space-efficient algorithms
- Allow the head to move only right => streaming algorithms

Outline For Today: Caching; Parallel Algorithms (Finding Min, Bellman-Ford); Linear Programming; Other Topics & Classes.

Online Algorithms

- Take as input a possibly infinite stream.
- At each point in time t, make a decision based on what has been seen so far, without knowing the rest of the input.
- Type of optimality analysis: the competitive ratio, the worst ratio (cost of the online algorithm)/(cost of OPT) over all input streams, where OPT is the best solution possible if we knew the entire input in advance.

Caching

[Figure: a fast cache holding pages 1..k in front of a slow disk, with requests R1, R2, R3 arriving.]

- If the requested page is in the cache (a hit), reply directly from the cache.
- Otherwise (a miss), send the request to the disk and put the fetched page into the cache. Q: Which page do we evict?

Caching as an Online Problem

- Input: N pages on disk, and an infinite stream of page requests.
- Online algorithm: decide which page to evict from the cache when the cache is full and there is a miss.
- Goal: minimize the number of misses.
- Idea (LRU): evict the Least Recently Used page.

LRU Example (k = 3)

Request sequence: 4, 1, 2, 1, 5, 3, 4, 4, 1, 1, 3, 2, 4, 5, 1

Request | Cache after | Outcome
4       | 4           | miss
1       | 4 1         | miss
2       | 4 1 2       | miss
1       | 4 1 2       | hit
5       | 5 1 2       | miss (evict 4)
3       | 5 1 3       | miss (evict 2)
4       | 5 4 3       | miss (evict 1)
4       | 5 4 3       | hit
1       | 1 4 3       | miss (evict 5)
...and so forth.

Competitive Ratio Claim

Claim: if the optimal sequence of choices for a size-h cache causes m misses, then for the same sequence of requests, LRU with a size-k cache causes at most (k/(k-h)) * m misses.

Interpretation: if LRU had twice as much cache as an algorithm OPT that knew the future (k = 2h), it would have at most twice the misses of OPT. Note: we will prove the claim for k > h.
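To make the LRU policy concrete, here is a minimal simulation sketch in Python (not from the slides; `lru_simulate` is a name chosen here for illustration). It replays the example request stream above with k = 3 and reports each hit/miss:

```python
from collections import OrderedDict

def lru_simulate(requests, k):
    """Simulate an LRU cache of size k; return the list of 'hit'/'miss' outcomes."""
    cache = OrderedDict()  # keys ordered from least to most recently used
    outcomes = []
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # refresh recency on a hit
            outcomes.append("hit")
        else:
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
            outcomes.append("miss")
    return outcomes

# The request stream from the example, with k = 3:
requests = [4, 1, 2, 1, 5, 3, 4, 4, 1, 1, 3, 2, 4, 5, 1]
outcomes = lru_simulate(requests, 3)
print(outcomes.count("miss"))  # 11 misses out of 15 requests
```

`OrderedDict` keeps insertion order, so moving a key to the end on every hit makes the front of the dict exactly the least recently used page.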

Proof of Competitive Ratio

Recursively break the sequence of requests into phases:
- Let t be the time when we see the (k+1)st distinct request. Phase 1: a_1 ... a_{t-1}.
- Let t' be the time when we see the (k+1)st distinct request starting from a_t. Phase 2: a_t ... a_{t'-1}.
- And so on.

For the example sequence 4, 1, 2, 1, 5, 3, 4, 4, 1, 1, 3, 2, 4, 5, 1 with k = 3, the phases are (4, 1, 2, 1), (5, 3, 4, 4), (1, 1, 3, 2), (4, 5, 1). By construction, each phase has k distinct requests.

Q: At most how many misses does LRU have in each phase?
A: At most k, because each of the k distinct pages in a phase misses at most once: once LRU brings a page in, it stays among the k most recently used pages for the rest of the phase, so it is not evicted again within the phase.

Q: What is the minimum number of misses that any size-h cache must have in each phase?
A: At least k - h, because at most h of the phase's k distinct pages can be in the cache at any point, so at least k - h of them must trigger misses.

Therefore the competitive ratio is k/(k-h). Q.E.D.

Outline For Today: Caching; Parallel Algorithms (Finding Min, Bellman-Ford); Linear Programming; Other Topics & Classes.

Parallel Algorithms

Question: which problems are parallelizable, and which are inherently sequential?
- Parallelizable: connected components, sorting, selection, and many computational geometry problems all have parallel algorithms.
- (Believed to be) inherently sequential (P-complete): DFS, Horn-satisfiability, Conway's Game of Life, and others.

2 Common Computational Models

Model 1: Shared Memory (PRAM). A single machine with one memory and processors CPU_1, ..., CPU_k. Each time step, each processor can read a location of memory and can write to a location of memory. Q: How much time does it take to solve a computational problem with polynomially many processors?

Example 1: Finding the Min in an Array

Input array: A = [4, 9, 2, 3, 5]. There are n(n-1) processors, one for each ordered pair (i, j) with i != j. Initially, allocate an output array of size n, all zeros.
- Step 1: each CPU(i, j) compares A[i] and A[j]; if A[i] < A[j], it writes 1 to output location j; otherwise it writes 1 to output location i.
- Step 2: return the item whose output location is still 0 (assuming distinct elements, exactly one such location exists).

For A = [4, 9, 2, 3, 5] the output array is [1, 1, 0, 1, 1], so the answer is A[3] = 2.

Model 2: Distributed Memory

Machines 1, ..., k, each holding one partition of the input. Each machine:
- performs local computation;
- sends/receives messages to/from other machines;
- can run synchronously or asynchronously.
Q: How much communication is necessary? How many synchronizations are necessary?

Recap: Bellman-Ford

For every vertex v and for i = 1, ..., n:
- P(v, i): a shortest s-to-v path with at most i edges (or null)
- L(v, i) = w(P(v, i)) (and +infinity for null paths)
- L(v, i) = min( L(v, i-1), min over edges (u,v) in E of [ L(u, i-1) + c(u,v) ] )

Example: Distributed Bellman-Ford

[Figure in the original: a six-vertex graph on A, B, C, D, E, F with edge weights 5, 3, -6, 2, -1, 2; successive frames show each vertex's distance estimate being updated from its neighbors' messages until the values converge.] Termination: when no vertex value changes.

Parallel Computing: CS 149, a systems course; it teaches parallel technologies but also algorithms.

Outline For Today: Caching; Parallel Algorithms (Finding Min, Bellman-Ford); Linear Programming; Other Topics & Classes.

Linear Programming (CS 261/361)

An optimization problem of the following structure:

maximize x1 + x2
subject to
  x1 + 2*x2 <= 1
  2*x1 + x2 <= 1
  x1 >= 0
  x2 >= 0

Geometric interpretation: constraint 1 (x1 + 2x2 <= 1) and constraint 2 (2x1 + x2 <= 1) cut out the feasible solution set, and the optimal solution sits at a corner of that region (here x1 = x2 = 1/3).

Linear Programming Applications

Tons! Lots and lots of problems can be solved or approximated with LP: vertex cover, set cover, load balancing, and lots of problems in manufacturing, operations research, finance, etc. Covered in CS 261/361. Invented by George Dantzig.

Outline For Today: Caching; Parallel Algorithms (Finding Min, Bellman-Ford); Linear Programming; Other Topics & Classes.

Approximation Algorithms (CS 261/CS 361)
- Approximation techniques (including LP and SDP: semidefinite programming)
- Many more approximation algorithms

Randomized Algorithms (CS 365)
- Commonly appearing general randomization techniques: the probabilistic method, Markov chains, Chernoff bounds
- Very, very cool/elegant algorithms!

Beyond Worst-Case Analysis (CS 369)
- Competitive ratio, average-case analysis, instance optimality, smoothed analysis, and others

Complexity Theory (CS 254)
- Resource lower bounds on computational problems. Algorithms study what can be computed, and with how much resource; complexity studies what cannot be computed with a given amount of resource.
- Turing machines, P vs. NP, circuit complexity, randomized complexity, the polynomial hierarchy, space complexity, PCPs, one-way functions

Quantum Computing (CS 259Q)
- The power of quantum computers over classical computers
- Quantum teleportation, quantum circuits
- Quantum algorithms: Fourier transform, factoring, search, etc.
- Quantum error correction

Computational Genomics (CS 262)
- Sequencing, genome assembly, gene sampling, gene finding, gene comparison, gene regulation, etc.

Computational Geometry (CS 268)
- Geometric modeling of physical objects
- Search in high-dimensional spaces
- Triangulation of a set of points
- Image processing / graphics / vision
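The two-step PRAM min-finding procedure described earlier can be illustrated with a sequential simulation. This is a sketch only: a real PRAM would run all n(n-1) comparisons in a single parallel step, and `pram_min` is a name chosen here, not from the slides. Distinct elements are assumed, as in the example.

```python
def pram_min(A):
    """Simulate the two-step PRAM min-finding algorithm (distinct elements assumed)."""
    n = len(A)
    out = [0] * n  # output array, initially all zeros
    # Step 1: processor (i, j) marks the larger of A[i], A[j] with a 1.
    # (On a PRAM all of these comparisons would happen concurrently.)
    for i in range(n):
        for j in range(n):
            if i != j:
                if A[i] < A[j]:
                    out[j] = 1
                else:
                    out[i] = 1
    # Step 2: the unique unmarked location holds the minimum.
    return A[out.index(0)]

print(pram_min([4, 9, 2, 3, 5]))  # prints 2, matching the slides' example
```

Every non-minimum element loses at least one comparison and gets marked, so only the minimum's location stays 0, giving O(1) parallel time with O(n^2) processors.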
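The Bellman-Ford recurrence recapped earlier translates directly into a table-filling sketch. `bellman_ford` and the small example graph below are illustrative choices (not the slides' exact six-vertex example); a no-negative-cycle input is assumed.

```python
import math

def bellman_ford(n, edges, s):
    """Compute L(v, i) = min(L(v, i-1), min over (u,v) of L(u, i-1) + c(u,v)).

    n: number of vertices (0..n-1); edges: list of (u, v, cost); s: source.
    Returns shortest-path distances from s (assumes no negative cycle).
    """
    L = [math.inf] * n
    L[s] = 0
    for _ in range(n - 1):  # paths with at most i edges, i = 1..n-1
        prev = L[:]         # L(., i-1)
        for u, v, c in edges:
            if prev[u] + c < L[v]:
                L[v] = prev[u] + c
    return L

# A small illustrative graph with a negative edge:
edges = [(0, 1, 5), (0, 2, 3), (1, 3, -6), (2, 3, 2), (3, 4, 2)]
print(bellman_ford(5, edges, 0))  # [0, 5, 3, -1, 1]
```

In the distributed version, each relaxation round becomes one round of message passing: every vertex sends its current estimate to its out-neighbors and keeps the minimum of what it receives, stopping when no value changes.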

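The example LP (maximize x1 + x2 subject to x1 + 2x2 <= 1, 2x1 + x2 <= 1, x1 >= 0, x2 >= 0) can be checked numerically. Since an LP optimum lies at a vertex of the feasible region, a sketch that just evaluates the objective at the corner points suffices; `solve_slide_lp` is an illustrative name and this hand-rolled check is not a general LP solver.

```python
from fractions import Fraction

def solve_slide_lp():
    """Maximize x1 + x2 over the example's feasible region by checking its corners."""
    half = Fraction(1, 2)
    third = Fraction(1, 3)
    # Corner points of the feasible region:
    vertices = [
        (Fraction(0), Fraction(0)),  # origin
        (half, Fraction(0)),         # 2*x1 + x2 = 1 meets x2 = 0
        (Fraction(0), half),         # x1 + 2*x2 = 1 meets x1 = 0
        (third, third),              # the two non-trivial constraints intersect
    ]
    return max(vertices, key=lambda p: p[0] + p[1])

x1, x2 = solve_slide_lp()
print(x1, x2)  # 1/3 1/3, with objective value 2/3
```

Exact rational arithmetic via `Fraction` avoids floating-point ties when comparing the corner values.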
Algorithmic Game Theory (CS 364)
- Design and analysis of computational problems in strategic environments
- Key analysis concept: the price of anarchy
- Also studies the complexity of finding equilibria
- Auctions, traffic routing, multi-agent systems, and others

My Recommendations For Future Classes
- Randomized Algorithms (CS 365) (Chernoff bounds)
- Linear Programming (CS 261/361)
- Complexity Theory (CS 254)

Acknowledgements
- Tim Roughgarden
- Keith Schwarz
- Billy, Mike & Yiming

Thank you! Good luck in the final!