A JVM for the Barrelfish Operating System 2nd Workshop on Systems for Future Multi-core Architectures (SFMA’12) Martin Maas (University of California, Berkeley) Ross McIlroy (Google Inc.) 10 April 2012, Bern, Switzerland

Source: parlab.eecs.berkeley.edu/sites/all/parlab/files/Maas.pdf


Page 1:

A JVM for the Barrelfish Operating System
2nd Workshop on Systems for Future Multi-core Architectures (SFMA’12)

Martin Maas (University of California, Berkeley)
Ross McIlroy (Google Inc.)

10 April 2012, Bern, Switzerland

Page 2:

Introduction

I Future multi-core architectures will presumably...
I ...have a larger number of cores
I ...exhibit a higher degree of diversity
I ...be increasingly heterogeneous
I ...have no cache-coherence/shared memory

I These changes (arguably) require new approaches for Operating Systems: e.g. Barrelfish, fos, Tessellation,...

I Barrelfish’s approach: treat the machine’s cores as nodes in a distributed system, communicating via message-passing.

I But: How to program such a system uniformly?

I How to exploit performance on all configurations?

I How to structure executables for these systems?

Page 3:

Introduction

I Answer: Managed Language Runtime Environments (e.g. Java Virtual Machine, Common Language Runtime)

I Advantages over a native programming environment:
I Single-system image
I Transparent migration of threads
I Dynamic optimisation and compilation
I Language extensibility

I Investigate challenges of bringing up a JVM on Barrelfish.
I Comparing two different approaches:

I Conventional shared-memory approach
I Distributed approach in the style of Barrelfish

Page 4:

Outline

1. The Barrelfish Operating System

2. Implementation Strategy
I Shared-memory approach
I Distributed approach

3. Performance Evaluation

4. Discussion & Conclusions

5. Future Work

Page 5:

The Barrelfish Operating System

I Barrelfish is based on the Multikernel Model: treats a multi-core machine as a distributed system.

I Communication through a lightweight message-passing library.

I Global state is replicated rather than shared.

[Figure 2.3: Interactions between Barrelfish’s core components — per-core CPU drivers in kernel mode; monitors, dispatchers and application domains in user mode.]

from the rest of the system. This allows Barrelfish to support heterogeneous systems, since the CPU driver exposes an ABI that is (mostly) independent from the underlying architecture of the core.

Monitor: The monitor runs in user-mode and together, the monitors across all cores coordinate to provide most traditional OS functionality, such as memory management, spanning domains between cores and managing timers. Monitors communicate with each other via inter-core communication. Global OS state (such as memory mappings) is replicated between the monitors and kept consistent using agreement protocols.

Dispatchers: Each core runs one or more dispatchers. These are user-level thread schedulers that are up-called by the CPU driver to perform the scheduling for one particular process. Since processes in Barrelfish can span multiple cores, they may have multiple dispatchers associated with them, one per core on which the process is running. Together, these dispatchers form the “process domain”. Dispatchers are responsible for spawning threads on the different cores of a domain, performing user-level scheduling and managing

Page 6:

Implementation

I Running real-world Java applications would require bringing up a full JVM (e.g. the Jikes RVM) on Barrelfish.

I Stresses the memory system (virtual memory is fully managed by the JVM), and Barrelfish lacked necessary features (e.g. page fault handling, file system).

I Would have distracted from understanding the core challenges.

I Approach: Implementation of a rudimentary Java Bytecode interpreter that provides just enough functionality to run standard Java benchmarks (Java Grande Benchmark Suite).

I Supports 198 out of 201 Bytecode instructions (except wide, goto_w and jsr_w), Inheritance, Strings, Arrays, Threads,...

I No Garbage Collection, JIT, Exception Handling, DynamicLinking or Class Loading, Reflection,...
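The core of such a bytecode interpreter is a dispatch loop: fetch an opcode, switch on it, and manipulate an operand stack. The following is a minimal illustrative sketch (not the actual implementation) covering four real JVM opcodes; the opcode values are the ones defined by the JVM specification.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MiniInterpreter {
    // Real JVM opcode values for a tiny subset of the instruction set.
    static final int ICONST_1 = 0x04, ICONST_2 = 0x05, IADD = 0x60, IRETURN = 0xac;

    static int execute(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>(); // operand stack
        int pc = 0;                                // program counter
        while (true) {
            int opcode = code[pc++];
            switch (opcode) {
                case ICONST_1: stack.push(1); break;
                case ICONST_2: stack.push(2); break;
                case IADD: {
                    int b = stack.pop(), a = stack.pop();
                    stack.push(a + b);
                    break;
                }
                case IRETURN: return stack.pop();
                default: throw new IllegalStateException("unsupported opcode " + opcode);
            }
        }
    }

    public static void main(String[] args) {
        // Bytecode for "return 1 + 2;"
        System.out.println(execute(new int[]{ICONST_1, ICONST_2, IADD, IRETURN})); // prints 3
    }
}
```

The actual interpreter extends this pattern to 198 opcodes, a frame stack, and a constant pool; the shape of the loop is the same.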

Page 7:

Shared memory vs. Distributed approach

[Figure: Shared memory vs. distributed approach. Shared memory: a single JVM domain runs functions against one shared heap (objects A-D). Distributed approach: per-core JVMs (JVM0-JVM3) each own part of the heap and coordinate via messages such as move object / move object ack, invoke / return, and putfield / putfield ack.]

Page 8:

The distributed approach

[Figure: Message sequence between jvm-node0..jvm-node3. An object lookup broadcasts obj request messages until the owning node answers with an obj request response; a method call then proceeds via invokevirtual, getfield / getfield response and invoke return messages, with the calling node blocked while it waits.]
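The getfield round-trip in this protocol can be sketched as follows. This is an illustrative single-threaded simulation, not the actual implementation: each node owns a disjoint heap partition, a non-local getfield becomes a request message to the object's home node, and the interpreter blocks until the reply arrives. Queues stand in for Barrelfish's message channels; all names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class DistributedHeapSketch {
    // This node's partition of the heap: objectId -> (field -> value).
    final Map<Integer, Map<String, Integer>> localHeap = new HashMap<>();
    // Incoming message queue; a Runnable stands in for a message handler.
    final Queue<Runnable> mailbox = new ArrayDeque<>();

    int getfield(DistributedHeapSketch homeNode, int objectId, String field) {
        if (homeNode == this) {
            return localHeap.get(objectId).get(field); // local fast path, no messages
        }
        Queue<Integer> reply = new ArrayDeque<>();
        // "getfield" request: the home node looks the field up in its own heap.
        homeNode.mailbox.add(() -> reply.add(homeNode.localHeap.get(objectId).get(field)));
        homeNode.pump();       // stand-in for asynchronous message delivery
        return reply.remove(); // the interpreter thread would block here
    }

    void pump() { while (!mailbox.isEmpty()) mailbox.remove().run(); }

    public static void main(String[] args) {
        DistributedHeapSketch node0 = new DistributedHeapSketch(); // home node
        DistributedHeapSketch node1 = new DistributedHeapSketch();
        node0.localHeap.put(42, new HashMap<>(Map.of("value", 7)));
        System.out.println(node1.getfield(node0, 42, "value")); // prints 7
    }
}
```

Note the asymmetry this models: the same getfield is a map lookup on the home node but a full message round-trip from everywhere else, which is exactly what the per-thread run-times later in the talk show.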

Page 9:

Performance Evaluation

I Performance evaluation using the sequential and parallel Java Grande Benchmarks (mostly Section 2 - compute kernels).

I Performed on a 48-core AMD Magny-Cours (Opteron 6168).

I Four 2x6-core processors, 8 NUMA nodes (8GB RAM each).

I Evaluation of the shared-memory version on Linux (using numactl to pin cores) and Barrelfish.

I Evaluation of the distributed version only on Barrelfish.

I Compared performance to industry-standard JVM (OpenJDK 1.6.0) with and without JIT compilation.

Page 10:

Single-core (sequential) performance

I Consistently within a factor of 2-3 of OpenJDK without JIT.

Figure 4.9: Application-benchmarks on a single core (Size A) — execution time in s:

Benchmark     | JVM (Barrelfish) | JVM (Linux) | OpenJDK (No JIT) | OpenJDK (JIT)
Crypt         | 8.75             | 8.55        | 4.83             | 0.28
FFT           | 67.29            | 54.89       | 27               | 4.4
HeapSort      | 16.34            | 16.19       | 5.43             | 0.39
LUFact        | 9.85             | 9.74        | 3.45             | 0.13
Series        | 21.08            | 18.22       | 17.89            | 11.78
SOR           | 40.47            | 43.02       | 20.01            | 1.02
SparseMatmult | 14.42            | 12.85       | 6.24             | 0.77

While these results do not represent new findings, they are important for evaluating the success of the project. They show that the developed JVM provides the performance necessary to run real-world software, both on Linux and Barrelfish.

4.5 Multi-core performance

Performance on multiple cores was evaluated using the parallel SparseMatmult benchmark from the JGF benchmark suite. The benchmark was chosen since it stresses inter-core communication and does not use Math.sqrt(), which exhibits different performance on Barrelfish and Linux (Figure 4.8).

Page 11:

Performance of the shared-memory approach

I Using the parallel sparse matrix multiplication Java Grande benchmark JGFSparseMatmultBenchSizeB.

I Scales to 48 cores as expected (relative to OpenJDK).

[Figure: JGFSparseMatmultBenchSizeB — execution time in s (log scale) against number of cores (0-50), for Barrelfish JVM (Barrelfish, Shared Memory), Barrelfish JVM (Linux), OpenJDK (No JIT) and OpenJDK.]

Page 12:

Performance of the shared-memory approach

I Quasi-linear speed-up implies large interpreter overhead.

I Barrelfish overhead presumably from agreement protocols.

[Figure 4.13: Average speed-up of the shared-memory approach against number of cores (0-50) for JGFSparseMatmultBenchSizeB, on Linux and Barrelfish (Shared).]

run-time of the experiments (which would have been up to 25h otherwise). Since the benchmark only measures the execution of the kernel, this gives a very close approximation for the run-time of JGFSparseMatmultBenchSizeA, divided by 10. I call this benchmark JGFSparseMatmultBenchSizeA*.

Cores | Run-time in s | σ (Standard deviation)4
1     | 2.701         | 0.0017
2     | 457.893       | 7.8906
3     | 395.681       | 3.5453
4     | 402.342       | 7.6161
5     | 444.382       | 2.1277
6     | 514.251       | 36.769
7     | 1764.32       | 247.74
8     | 2631.27       | 335.90
16    | 9333.87       | (only executed once)

Table 4.1: Results of JGFSparseMatmultBenchSizeA*

4 Since some executions exhibited a high variance, σ is given for this experiment.

Page 13:

Performance of the distributed approach

I Distributed approach is orders of magnitude slower than shared-memory approach.

I Sparse Matrix Multiplication is a difficult benchmark for this implementation: 7 pairs of messages for each iteration of the kernel (almost no communication for shared-memory).

I Overhead arguably caused by inter-core communication(150-600 cycles) and message handling in Barrelfish.

Cores | Run-time in s | σ (Standard deviation)
1     | 2.70          | 0.002
2     | 458           | 7.891
3     | 396           | 3.545
4     | 402           | 7.616
5     | 444           | 2.128
6     | 514           | 36.77
7     | 1764          | 247.7
8     | 2631          | 335.9
16    | 9334          | (only executed once)

Page 14:

Performance of the distributed approach

I Measuring completion time of threads on different cores shows performance limitation due to inter-core communication.

I All data "lives" on the same home node (Core #0).

I Cores 0-5 are within a single processor; cores 6 and 7 are off-chip.


The results show that without optimisation, the distributed approach is too slow to be feasible, at least for this benchmark. Measuring the run-time of each individual thread gives evidence that this is caused by the overhead of message passing: While a thread running on the home node of the working set (jvm-node0) completes very quickly, threads on other cores take orders of magnitude longer (Figure 4.14). The diagram also confirms that communication with cores on other chips (#6 and #7) is significantly more expensive than on-chip communication (Figure 4.3).

Figure 4.14: Run-times of individual threads (by jvm-node):

Thread (running on jvm-node*) | Execution time in s
#0                            | 0.36
#1                            | 453.24
#2                            | 452.06
#3                            | 453.52
#4                            | 452.58
#5                            | 470.05
#6                            | 2,386.94
#7                            | 2,478.17

For this particular benchmark, the distributed JVM has to exchange 7 pairs of messages for each iteration of the loop in Listing 4.1 (1 getfield, 1 astore, 5 aload), while the shared-memory approach requires almost no inter-core communication (all arrays reside in the local cache most of the time and there is little contention, since different threads write to different parts of the output array). There are two basic aspects that add to the overhead of the message passing:

• Inter-core communication: Each message transfer has to invoke the cache coherence protocol, causing a delay of up to 150-600 cycles, depending on the architecture and the number of hops [12].

• Message handling: The client has to yield the interpreter thread, poll for messages, execute the message handler code and unblock the interpreter thread. This involves two context switches and a time in-
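Listing 4.1 is not reproduced here, but the JGF sparse matrix multiplication kernel has roughly the following shape (field and array names are illustrative). The comments mark the accesses that plausibly account for the per-iteration message pairs when the arrays live on a remote node: five array-element loads, one array-element store, and a field access for the array reference.

```java
public class SparseKernelSketch {
    double[] val, x, y; // non-zero values, input vector, output vector
    int[] row, col;     // coordinates of each non-zero entry

    // One pass over the nz non-zero entries: y[row[i]] += x[col[i]] * val[i].
    void kernel(int nz) {
        for (int i = 0; i < nz; i++) {
            // Loads: row[i], col[i], val[i], x[col[i]], y[row[i]]  -> 5 remote loads
            // Store: y[row[i]] = ...                               -> 1 remote store
            y[row[i]] += x[col[i]] * val[i];
        }
    }

    public static void main(String[] args) {
        SparseKernelSketch s = new SparseKernelSketch();
        s.val = new double[]{2.0, 3.0};
        s.row = new int[]{0, 1};
        s.col = new int[]{1, 0};
        s.x = new double[]{10.0, 20.0};
        s.y = new double[2];
        s.kernel(2);
        System.out.println(s.y[0] + " " + s.y[1]); // prints 40.0 30.0
    }
}
```

Under shared memory each of these accesses is a cached load or store; under the distributed approach each one that misses the local node becomes a full message round-trip, which is why this kernel is such a hard case.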

Page 15:

Discussion & Future Work

I Preliminary results show that future work should focus on reducing message-passing overhead and number of messages.

I How can these overheads be alleviated?
I Reduce inter-core communication: Caching of objects and arrays, like a directory-based MSI cache-coherence protocol.
I Reduce message-passing latency: Hardware support for message-passing (e.g. running on the Intel SCC).
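The proposed directory-based MSI-style caching could be sketched as follows: the home node keeps a directory entry per object tracking which nodes hold it Shared or Modified, and grants reads and writes by downgrading or invalidating other copies. This is only an illustration of the idea the slide proposes as future work, not an implemented design; all names are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

public class ObjectDirectorySketch {
    enum State { INVALID, SHARED, MODIFIED }

    State state = State.INVALID;
    final Set<Integer> sharers = new HashSet<>(); // nodes holding a Shared copy
    int owner = -1;                               // node holding the Modified copy

    // A node asks to read: any Modified copy is written back and downgraded.
    void readRequest(int node) {
        if (state == State.MODIFIED) {
            // (send a writeback message to `owner` here)
            sharers.add(owner);
            owner = -1;
        }
        sharers.add(node);
        state = State.SHARED;
    }

    // A node asks to write: all other copies are invalidated first.
    void writeRequest(int node) {
        sharers.remove(node);
        // (send invalidate messages to every remaining sharer here)
        sharers.clear();
        owner = node;
        state = State.MODIFIED;
    }

    public static void main(String[] args) {
        ObjectDirectorySketch dir = new ObjectDirectorySketch();
        dir.readRequest(1);
        dir.readRequest(2);
        dir.writeRequest(1);
        System.out.println(dir.state + " owner=" + dir.owner); // prints MODIFIED owner=1
    }
}
```

The pay-off would be that repeated reads of a cached object cost no messages at all; messages are only needed on the first access and on invalidation, exactly the accesses that dominate the SparseMatmult kernel.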

I Additional areas of interest:
I Garbage Collection on such a system.
I Relocation of objects at run-time.
I Logical partitioning of objects.

I Future work should investigate bringing up the Jikes RVM on Barrelfish, focussing on these aspects.

Page 16:

Questions?