UNIVERSITY OF MASSACHUSETTS AMHERST • Department of Computer Science

CRAMM: Virtual Memory Support for Garbage-Collected Applications

Ting Yang, Emery Berger, Scott Kaplan†, Eliot Moss
Department of Computer Science, University of Massachusetts
†Dept. of Math and Computer Science, Amherst College
{tingy,emery,moss}@cs.umass.edu, [email protected]




Motivation: Heap Size Matters

GC languages (Java, C#, Python, Ruby, etc.) are increasingly popular.

Heap size is critical:
- Too large: paging (10-100x slower)
- Too small: excessive number of collections hurts throughput

[Figure: a JVM with a 120MB heap in 100MB of memory pages to disk through the VM/OS; a 60MB heap fits within the 100MB of memory.]


What is the right heap size?

Find the sweet spot:
- Large enough to minimize collections
- Small enough to avoid paging
BUT: the sweet spot changes constantly (multiprogramming).

CRAMM: Cooperative Robust Automatic Memory Management

Goal: through cooperation between the OS and the GC, keep garbage-collected applications running at their sweet spot.


CRAMM Overview

Cooperative approach:
- Collector-neutral heap sizing model (GC), suitable for a broad range of collectors
- Statistics-gathering virtual memory manager (OS)

Automatically resizes the heap in response to memory pressure:
- Grows to maximize space utilization
- Shrinks to eliminate paging

Improves performance by up to 20x. Overhead on non-GC applications: 1-2.5%.


Outline

- Motivation
- CRAMM overview
- Automatic heap sizing
- Statistics gathering
- Experimental results
- Conclusion


GC: How do we choose a good heap size?


GC: Collector-neutral model

Model the memory a collector needs as a linear function of heap size: WSS ≈ a × heapSize + b.
- a (heapUtilFactor): a constant dependent on the GC algorithm
- b (fixed overhead): JVM, libraries, and code; for copying collectors, also the app's live size

SemiSpace (copying): a ≈ ½, b ≈ JVM + code + app's live size


GC: a collector-neutral WSS model

WSS ≈ a × heapSize + b:
- SemiSpace (copying): a ≈ ½, b ≈ JVM + code + app's live size
- MS (non-copying): a ≈ 1, b ≈ JVM + code

a (heapUtilFactor): constant dependent on the GC algorithm.
b (fixed overhead): libraries and code; for copying collectors, also the app's live size.


GC: Selecting new heap size

Inputs:
- From the GC: heapUtilFactor (a) and cur_heapSize
- From the VMM: WSS and available memory

Change the heap size so that the working set just fits in the currently available memory.
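The resize rule can be sketched in a few lines. This is a minimal illustration, not CRAMM's actual code: the function name and arguments are invented, and it assumes the linear model WSS ≈ a × heapSize + b from the preceding slides.

```python
def choose_heap_size(cur_heap, wss, available, a):
    """Pick a new heap size so the predicted working set just fits
    in available memory.

    Under the linear model WSS = a * heapSize + b, changing the heap
    by delta changes the WSS by a * delta, so we solve
    a * delta = available - wss for delta.
    """
    delta = (available - wss) / a  # positive: room to grow; negative: shrink
    return cur_heap + delta

# SemiSpace-like collector (a = 0.5): the WSS is 90MB but 110MB is
# available, so the heap can grow by (110 - 90) / 0.5 = 40MB.
print(choose_heap_size(60, 90, 110, 0.5))    # 100.0
# Under pressure (only 100MB available, WSS 130MB) the heap shrinks.
print(choose_heap_size(120, 130, 100, 0.5))  # 60.0
```

Note that the fixed overhead b cancels out of the update: only the slope a is needed to translate a memory surplus or deficit into a heap-size change.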


Heap Size vs. Execution Time, WSS

[Plots: execution time vs. heap size follows a 1/x shape; WSS vs. heap size is linear (fitted line Y = 0.99·X + 32.56).]


VM: How do we collect the information needed to support heap size selection (WSS, available memory) with low overhead?


Calculating WSS (w.r.t. a 5% fault rate)

[Figure: a memory reference sequence is replayed against an LRU queue holding pages in LRU order. Each hit increments the hit histogram entry associated with the page's LRU position; summing the histogram's tail yields the fault curve (faults vs. pages in memory), from which the WSS for the target fault rate is read.]
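The histogram construction above can be sketched in user space. CRAMM maintains this incrementally inside the kernel; the functions below are illustrative names for a simplified replay over an exact LRU queue.

```python
from collections import OrderedDict

def lru_hit_histogram(refs):
    """Replay a page reference string against an exact LRU queue,
    recording for each hit the LRU position of the page
    (0 = most recently used). First touches are cold misses."""
    lru = OrderedDict()   # insertion order = LRU order; last entry is MRU
    hist = {}             # LRU position -> hit count
    cold = 0
    for page in refs:
        if page in lru:
            pos = len(lru) - 1 - list(lru).index(page)  # distance from MRU
            hist[pos] = hist.get(pos, 0) + 1
            lru.move_to_end(page)
        else:
            cold += 1
            lru[page] = True
    return hist, cold

def wss(hist, cold, total_refs, target=0.05):
    """Smallest memory size m (in pages) whose fault count -- cold
    misses plus hits beyond position m -- stays within the target
    fraction of all references (the fault curve read at `target`)."""
    for m in range(1, max(hist, default=0) + 2):
        faults = cold + sum(c for pos, c in hist.items() if pos >= m)
        if faults <= target * total_refs:
            return m
    return max(hist, default=0) + 1

hist, cold = lru_hit_histogram("aababc")
print(hist, cold)               # {0: 1, 1: 2} 3
print(wss(hist, cold, 6, 0.5))  # 2: two resident pages keep faults within 50%
```

A reference at LRU position p faults whenever fewer than p + 1 pages are resident, which is why the fault count for memory size m sums the histogram entries at positions ≥ m.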


WSS: hit histogram

Not possible in a standard VM:
- Global LRU queue
- No per-process/file information or control
- Difficult to estimate WSS / available memory

CRAMM VM:
- Per-process/file page management
- Page lists: Active, Inactive, Evicted
- Adds and maintains the histogram


WSS: managing pages per process

Each process's pages are kept on three lists: Active (CLOCK), Inactive (LRU), and Evicted (LRU).
- Inactive pages are protected by turning off permissions; touching one triggers a minor fault.
- Evicted pages are on disk; touching one triggers a major fault, followed by refill and adjustment of the lists.

[Diagram: the three page lists, each entry holding a header, page descriptor, and AVL node; the hit histogram plots faults vs. pages.]
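The three-list discipline can be illustrated with a toy state machine. The real CRAMM implementation lives in the Linux kernel; the class and method names below are invented for this sketch.

```python
class PageLists:
    """Per-process page lists: Active (hardware-visible), Inactive
    (permissions turned off), Evicted (written out to disk)."""
    def __init__(self):
        self.active, self.inactive, self.evicted = [], [], []

    def reference(self, page):
        """Return the kind of event a reference to `page` causes."""
        if page in self.active:
            return "hit"                 # no fault; reference bit set by hardware
        if page in self.inactive:
            self.inactive.remove(page)   # minor fault: restore permissions,
            self.active.append(page)     # a hit at this LRU position is recorded
            return "minor fault"
        if page in self.evicted:
            self.evicted.remove(page)    # major fault: read back from disk
            self.active.append(page)
            return "major fault"
        self.active.append(page)         # first touch
        return "demand zero"

    def deactivate(self, n):
        """Protect the n coldest active pages (front of the list)."""
        for _ in range(min(n, len(self.active))):
            self.inactive.append(self.active.pop(0))

    def evict(self, n):
        """Push the n coldest inactive pages out to disk."""
        for _ in range(min(n, len(self.inactive))):
            self.evicted.append(self.inactive.pop(0))

pl = PageLists()
pl.reference("p")         # demand zero: first touch
pl.deactivate(1)          # p is protected
print(pl.reference("p"))  # minor fault
pl.deactivate(1); pl.evict(1)
print(pl.reference("p"))  # major fault
```

Minor faults are cheap (a permission flip), so the Inactive list can be kept long enough to observe LRU behavior; only the Evicted list costs disk I/O.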


WSS: controlling overhead

A buffer of unprotected pages sits between the Active and Inactive lists, bounding the minor-fault rate; histogram collection is kept within a control boundary of 1% of execution time.

[Diagram: the same page lists as before, with the buffer inserted between Active (CLOCK) and Inactive (LRU).]


Calculating available memory

What is "available" (not "free")? Consider the page cache:
- Policy question: are pages from closed files "free"?
- Yes, and they are easy to distinguish in CRAMM (kept on a separate list).

Available memory = all resident application pages + free pages in the system + pages from closed files


Experimental Results


Experimental Evaluation

Experimental setup:
- CRAMM (Jikes RVM + Linux) vs. unmodified Jikes RVM, JRockit, HotSpot
- GC algorithms: GenCopy, CopyMS, MS, SemiSpace, GenMS
- Benchmarks: SPECjvm98, DaCapo, SPECjbb, ipsixql + SPEC2000

Experiments:
- Overhead without memory pressure
- Dynamic memory pressure


CRAMM VM: Efficiency

Overhead: on average, 1%-2.5%.

[Bar chart: CRAMM VM overhead (%, 0-4 scale) for SPEC2Kint, SPEC2Kfp, and Java benchmarks under GenCopy, SemiSpace, MarkSweep, GenMS, and CopyMS, with each bar split into additional overhead and histogram collection.]


Dynamic Memory Pressure (1)

GenMS, SPECjbb (modified) with 160MB of memory; initial heap size: 120MB.

[Plot: transactions finished (thousands) vs. elapsed time (seconds) for three runs:]
- Stock w/o pressure: 296.67 s, 1136 major faults
- CRAMM w/ pressure: 302.53 s, 1613 major faults, 98% CPU
- Stock w/ pressure: 720.11 s, 39944 major faults, 48% CPU


Dynamic Memory Pressure (2)

[Charts: SPECjbb (modified) normalized elapsed time and transactions finished (thousands) for CRAMM-GenMS, CRAMM-MS, and CRAMM-SS vs. JRockit and HotSpot.]


Conclusion

Cooperative Robust Automatic Memory Management (CRAMM):
- GC: collector-neutral WSS model
- VM: statistics-gathering virtual memory manager

Dynamically chooses a nearly optimal heap size for GC applications:
- Maximizes use of memory without paging
- Minimal overhead (1%-2.5%)
- Quickly adapts to changes in memory pressure

http://www.cs.umass.edu/~tingy/CRAMM
