
Page 1:

Linux QoS framework usage report for containers and cloud, and challenges ahead

Vikas Shivappa, Intel

Acknowledgements: Tony Luck, Matt Fleming, CSIG-Intel

1

Page 2:

Agenda

• Problem definition

• Why use Kernel QoS framework

• Intel Cache/Memory QoS support

• Kernel implementation

• OpenStack and Container support

• Performance improvement

• Future Work

2

Page 3:

Without Cache/Memory QoS framework (quality of service)

[Diagram: high-pri and low-pri apps running on cores C1-C3, which share the processor cache; the low-pri apps may get more cache]

- Increasing cores => multithreading => L3 contention

- Noisy neighbour => degraded/inconsistent response => QoS difficulties

- HPC

3

Page 4:

Agenda

• Problem definition

• Why use Kernel QoS framework

• Intel Cache/Memory QoS support

• Kernel implementation

• OpenStack and Container support

• Performance improvement

• Future Work

4

Page 5:

Why use the Cache/Memory QoS framework?

[Diagram: threads managed by the kernel, which hides the architectural details of ID management/scheduling]

• User-friendly interfaces: perf / cgroup

• Abstracts many architectural/system-level details

5

Page 6:

With Cache QoS

- Helps monitor and control shared resources => consistent response => better QoS

- Applies to cloud or server clusters, containers, HPC

[Diagram: user space (high-pri and low-pri apps) -> kernel Cache QoS framework -> Intel QoS h/w support -> processor cache; the controls allocate the appropriate cache to high-pri apps]

6

Page 7:

Agenda

• Problem definition

• Why use Kernel QoS framework

• Intel Cache/Memory QoS support

• Kernel implementation

• OpenStack support

• Container support

• Performance improvement

• Future Work

7

Page 8:

What is Cache/Memory QoS?

• Cache/Memory b/w Monitoring

– cache occupancy / memory b/w per thread

– perf interface

• Cache Allocation

– user can allocate overlapping subsets of the cache to applications

– cgroup interface (out of tree only; a new interface is coming up)

8

Page 9:

Intel QoS Terminologies

• RDT – Resource Director Technology

– the umbrella "processor QoS" feature under which CMT/CAT/MBM etc. are sub-features

• CMT – Cache Monitoring Technology (also called CQM)

• CAT – Cache Allocation Technology

• MBM – Memory Bandwidth Monitoring

9

Page 10:

Cache line / Thread ID (Identification)

• Cache Monitoring

– RMID (Resource Monitoring ID) per PID

– RMIDs are tagged onto cache lines as they are allocated

• Cache Allocation

– CLOSid (Class of Service ID)

– restricts which portions of the cache can be filled

• Memory b/w

– RMID <=> total L3 external b/w

10
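As an illustrative sketch (not kernel code; the names are invented for illustration), the tagging model above can be simulated: each allocated cache line carries the RMID of the thread that filled it, and per-RMID occupancy is the sum of lines carrying that tag.

```python
# Illustrative sketch of the ID model above: cache lines are tagged
# with the RMID of the thread that allocated them, and per-RMID
# occupancy is just the sum of lines carrying that tag.

LINE_SIZE = 64                      # bytes per cache line

cache = {}                          # line address -> RMID tag

def fill_line(addr, rmid):
    cache[addr] = rmid              # the tag travels with the line

def occupancy_bytes(rmid):
    return sum(LINE_SIZE for tag in cache.values() if tag == rmid)

# A thread with RMID 1 touches three lines, a thread with RMID 2 one:
for addr in (0x000, 0x040, 0x080):
    fill_line(addr, rmid=1)
fill_line(0x0C0, rmid=2)

assert occupancy_bytes(1) == 192    # 3 lines * 64 bytes
assert occupancy_bytes(2) == 64
```

This is also why occupancy lingers after an RMID is freed (see the recycling slides): the tags stay on the lines until they are evicted or refilled.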

Page 11:

Agenda

• Problem definition

• Existing techniques

• Why use Kernel QoS framework

• Intel Cache QoS support

• Kernel implementation

• OpenStack and Container support

• Performance improvement

• Future Work

11

Page 12:

Kernel Implementation

[Diagram: user-space threads use the cgroup fs (/sys/fs/cgroup) and perf as interfaces to the kernel QoS support, which implements cache allocation and cache/memory b/w monitoring on top of Intel Xeon QoS hardware (shared L3 cache and memory). The kernel configures a bitmask per CLOS via MSR writes (allocation configuration), sets the CLOS/RMID for a thread during context switch, and reads event counters to return the monitored data.]

12

Page 13:

Memory b/w Monitoring

[Diagram: two sockets (Socket0, Socket1), each with cores sharing an L3 cache and its own memory controllers and memory; each socket has its own RMID1…RMIDn and CLOSID1…CLOSIDn. MBM reports local memory b/w per socket, plus total memory b/w.]

Page 14:

MBM implementation continued

• Typical perf counting:

– sched_in: prev_count = read_hw_count();

– sched_out: c = read_hw_count(); count += c - prev_count;

• This won't work for MBM, as we have per-package RMIDs

– doing the above on 2 core siblings for a PID with the same RMID would result in a duplicate count.

14
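The double-counting problem above can be shown with a minimal Python simulation (not kernel code; the names and byte counts are invented for illustration): the hardware counter is per (package, RMID), so two sibling cores doing independent sched_in/sched_out deltas against the same counter both observe the same bytes.

```python
# Why naive per-CPU sched_in/sched_out deltas double-count MBM:
# the counter is per (package, RMID), so sibling cores reading the
# same counter each see the same traffic.

hw_counter = {("pkg0", "rmid1"): 0}   # per-package, per-RMID byte count

def memory_traffic(pkg, rmid, nbytes):
    hw_counter[(pkg, rmid)] += nbytes

def run_naive(cpus_sharing_pkg):
    """Each CPU: prev = read() at sched_in, count += read() - prev at sched_out."""
    total = 0
    prev = {cpu: hw_counter[("pkg0", "rmid1")] for cpu in cpus_sharing_pkg}
    memory_traffic("pkg0", "rmid1", 100)   # the task generates 100 bytes
    for cpu in cpus_sharing_pkg:           # sched_out on every sibling
        total += hw_counter[("pkg0", "rmid1")] - prev[cpu]
    return total

# Two sibling cores on the same package, same RMID:
assert run_naive(["cpu0", "cpu1"]) == 200   # 100 bytes counted twice!

# Correct approach: keep one count per (package, RMID) and read it once.
hw_counter[("pkg0", "rmid1")] = 0
start = hw_counter[("pkg0", "rmid1")]
memory_traffic("pkg0", "rmid1", 100)
assert hw_counter[("pkg0", "rmid1")] - start == 100
```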

Page 15:

MBM hierarchy monitoring

15

[Diagram: a sample cgroup hierarchy G1 -> {G11, G12}, G12 -> G121, G121 -> {G1211, G1212}. Events e1 and e2 monitor the hierarchy and initially share RMID1; later e2 loses RMID1 and gets RMID3; e3 shares RMID2. Along a timeline, the counts grow (0 MB, 5 MB, 10 MB, 11 MB on RMID1; 0 MB, 1 MB, 2 MB on RMID3; 0 MB, 5 MB on RMID2) between each event's start and end. Expected readings:
- e1: should read 10 MB
- e2: should read 13 MB
- e3: should read 5 MB]

- Other considerations:

- movement of tasks between cgroups

- MBM counter overflow

Page 16:

MBM hierarchy monitoring

16

• Implemented using periodic updates of a 'per-RMID count' as well as a 'per-event count'

• This takes care of all the scenarios:

– task movement between cgroups

– RMID recycling

– events that start counting the same cgroup at different times (they only need to read the current event count)
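The scheme above can be sketched in a few lines of Python (an illustrative sketch, not the kernel implementation; the names are invented): keep a periodically updated cumulative count per RMID, and let each event snapshot that count when it starts, reporting the difference on read. A late-starting event is then automatically unaffected by earlier traffic.

```python
# Sketch of periodic per-RMID counts plus per-event baselines.

cumulative = {}          # rmid -> total bytes ever observed

def periodic_update(rmid, hw_bytes_since_last):
    # Called by a periodic timer; because only deltas are accumulated,
    # this also absorbs hardware counter overflow.
    cumulative[rmid] = cumulative.get(rmid, 0) + hw_bytes_since_last

class Event:
    def __init__(self, rmid):
        self.rmid = rmid
        self.base = cumulative.get(rmid, 0)   # snapshot at event start
    def read(self):
        return cumulative.get(self.rmid, 0) - self.base

periodic_update("rmid1", 5_000_000)
e1 = Event("rmid1")                  # starts counting now
periodic_update("rmid1", 10_000_000)
e2 = Event("rmid1")                  # starts later, same cgroup
periodic_update("rmid1", 3_000_000)

assert e1.read() == 13_000_000       # sees everything after its start
assert e2.read() == 3_000_000        # unaffected by earlier traffic
```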

Page 17:

Usage: basic monitoring of per-thread cache occupancy / memory b/w

17

[Screenshot: basic usage example; the results display the total cache occupancy and total memory b/w for the thread.]

Page 18:

Other Usage modes

• Monitor a cgroup

• Per-socket monitoring

– --per-socket does not work, as these are not CPU events

– --per-cpu doesn't work either

– use -C <cpu in socket N> instead

• System-wide

– fails if both -a and -t are given (system-wide task mode)

18

Page 19:

Usage Scenarios

• Units that can be monitored for cache/memory b/w

– Process/tasks

– Virtual machines and cloud (transfer all PIDs of VM to one cgroup)

– Containers (put the entire container into one cgroup)

• Restrict the noisy neighbour

• Fair cache allocation to resolve cache contention

19

Page 20:

Agenda

• Problem definition

• Existing techniques

• Why use Kernel QoS framework

• Intel Cache QoS support

• Kernel implementation

• OpenStack / Container support

• Challenges

• Performance improvement

• Future Work

20

Page 21:

OpenStack usage

[Diagram: applications and the OpenStack dashboard on top of the OpenStack services (compute, network, storage, integration), running on standard hardware with shared L3 caches.]

21

Page 22:

OpenStack usage (continued)

[Diagram: OpenStack drives libvirt (virt-manager, oVirt, ...), which manages KVM, Xen, etc. and reaches the kernel Cache QoS support via the perf syscall / cgroup interfaces.]

- libvirt patches submitted (Qiaowei [email protected]), based on the kernel QoS framework

- CAT/CMT/MBM were demoed at OpenStack forums/conferences

22

Page 23:

Containers support

• A Docker support patch was built to use the new CAT cgroup

• It was a simple change, as Docker and systemd already have all the plumbing to use cgroups

23

Page 24:

Cyclic tests using Docker

24

- With CAT (green curve), the response latency is more consistent, in a range comparable to the no-noise scenario (0-16)

- Most of the samples fall in the 1-9 range.

Page 25:

25

Baseline: NGINX web server, ext. load generation system, 2x Intel® Xeon® processor E5-2699 v4, 2.2GHz, 22c, 64GB DDR4-2133, 10Gb X540-AT2 NICs. Ubuntu 14.04, kernel v4.4 + RDT patches. C1E / turbo disabled. CAT: restrict "noisy neighbors": CAT mask 0x00003. "Noisy neighbor" apps: 11 processes/skt of stream, array size 100e6. Ext. load generation system: wg/WRK running 22 thrds, Ubuntu* 14.04, 2x Intel Xeon processor L5520 @ 2.27GHz CPUs, 24GB DDR3-1067 with 10Gb Intel® X540-AT2 NICs. Data source: AppFormix, March 2016

[Charts comparing a 2S Intel® Xeon® processor E5-2699 v4 without and with CAT: "Improved Average Web Server Latencies" and "Improved Worst-Case Web Server Latencies" (avg. response time in ms), plus requests per second. Workload: NGINX-based webserver on Intel Xeon processor E5 v4, 100KB request size.]

Cache Allocation Technology (CAT) can prioritize important VMs - e.g., a web server

NGINX* Web Server Performance

AppFormix* - Orchestration with Containers (Kubernetes)

Page 26:

UC Berkeley RDT usage

26

• Network functions execute simultaneously on isolated cores; the throughput of each virtual machine is measured

• Min packet size (64 bytes), 100K flows, uniformly distributed

Page 27:

OSV adoption status

Intel RDT support status for OSVs

CMT:

RHEL 7.2 (3.10): merged

Ubuntu 15.10 (4.2): merged

SLES12 SP2 Beta (4.4): finished backporting and test, will merge

Alibaba, Baidu: Backported and in Testbed

MBM:

RHEL 7.3 RC (3.10): finished backporting and test, will merge

Ubuntu 16.04 (4.4): merged

SLES12 SP3 Beta (4.4): will submit request

Alibaba, Baidu: Backported and in Testbed

CAT, CDP :

Currently all using out of tree patches. Waiting for upstream patches

Google : using currently in testbed

Alibaba, Baidu: Backported and in Testbed

27

Page 28:

Challenges

• OpenStack, Container next steps

• What if we run out of IDs?

• What about scheduling overhead?

• Doing monitoring and allocation together

28

Page 29:

OpenStack/container next steps for CAT/CDP

• Kernel CAT cgroup support will remain out of tree

– cgroup pros:

• OpenStack/Docker and other enterprise users like Google could use the feature on a test bed and are ready to adopt

• Was supported by much of the community (PeterZ/HPA/Docker/Google) for quite some time.

• Issues like the hierarchy/kernel-thread issue were related to cgroups.

– Cons:

• Thomas rejected the cgroup interface eventually.

• We quickly run out of CLOSids with a cgroup hierarchy, more so in v2; however, reuse mitigated some of the issues.

• Could not do per-socket CLOSids due to an atomic-update issue.

• OpenStack and Docker CAT support needs a rewrite to use the new CAT (resctrl) interface.

29

Page 30:

What if we run out of IDs?

• Group tasks together (by process?)

• Group cgroups with the same mask together

• Return -ENOSPC

• Postpone / recycle

30
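Two of the strategies above can be sketched together in Python (an illustrative sketch, not kernel code; the class and names are invented): reuse one CLOSid for every group that requests the same bitmask, and fail with an error (analogous to the kernel's -ENOSPC) once the IDs are exhausted.

```python
# Sketch: share CLOSids between groups with identical masks, and
# report exhaustion instead of silently misallocating.
import errno

class ClosAllocator:
    def __init__(self, num_closids):
        self.free = list(range(num_closids))
        self.by_mask = {}     # bitmask -> (closid, refcount)

    def get(self, mask):
        if mask in self.by_mask:              # same mask: share the ID
            closid, ref = self.by_mask[mask]
            self.by_mask[mask] = (closid, ref + 1)
            return closid
        if not self.free:                     # out of IDs
            return -errno.ENOSPC
        closid = self.free.pop(0)
        self.by_mask[mask] = (closid, 1)
        return closid

alloc = ClosAllocator(num_closids=2)
assert alloc.get(0xFF) == 0        # first mask gets CLOSid 0
assert alloc.get(0x0F) == 1        # second mask gets CLOSid 1
assert alloc.get(0xFF) == 0        # identical mask shares CLOSid 0
assert alloc.get(0x03) == -errno.ENOSPC   # no IDs left
```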

Page 31:

RMID recycling

• Not really 'virtual RMIDs' currently, as we don't switch RMIDs at context switch.

• For CQM, cache occupancy is still tied to the RMID after we 'free' it -> it goes onto a limbo list until the occupancy drains.

• For MBM, however, the RMIDs can be reused immediately without waiting for zero occupancy.

31
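The lifecycle above can be sketched as follows (an illustrative sketch, not kernel code; the names are invented): a freed CQM RMID parks on a limbo list until its measured occupancy reaches zero, while a freed MBM RMID (bandwidth is a delta, not a residue) goes straight back to the free list.

```python
# Sketch of the RMID free / limbo / reuse states described above.

class RmidPool:
    def __init__(self, rmids):
        self.free = list(rmids)
        self.limbo = []

    def alloc(self):
        return self.free.pop(0) if self.free else None

    def free_cqm(self, rmid):
        self.limbo.append(rmid)    # occupancy is still attributed to it

    def free_mbm(self, rmid):
        self.free.append(rmid)     # b/w counts are deltas: reuse now

    def scan_limbo(self, occupancy):
        # Periodic scan: move drained RMIDs back to the free list.
        drained = [r for r in self.limbo if occupancy(r) == 0]
        self.limbo = [r for r in self.limbo if occupancy(r) != 0]
        self.free.extend(drained)

pool = RmidPool([1])
r = pool.alloc()
pool.free_cqm(r)
assert pool.alloc() is None            # still in limbo, not reusable
pool.scan_limbo(lambda rmid: 0)        # occupancy has drained to zero
assert pool.alloc() == 1               # now reusable
```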

Page 32:

RMID recycling

32

[State diagram: F - Free state (f: free count), L - Limbo, A - Allocated; e - event (er: # of required RMIDs)]

Page 33:

RMID recycling accuracy

• In the current scheme, the counting time is proportional to the ratio of available RMIDs to required RMIDs

• Example: 80 RMIDs available, 100 RMIDs required

– on average an event is counted for 80% of the time and missed for 20% of the time

33
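The accuracy estimate above reduces to a one-line calculation (assuming, as the slide implies, that recycling rotates RMIDs evenly among the waiting events):

```python
# With round-robin recycling, each event holds an RMID for roughly
# available/required of the time, capped at 100%.

def counted_fraction(available_rmids, required_rmids):
    return min(1.0, available_rmids / required_rmids)

assert counted_fraction(80, 100) == 0.8    # counted 80%, missed 20%
assert counted_fraction(100, 80) == 1.0    # enough RMIDs for everyone
```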

Page 34:

Scheduling performance

• An MSR read/write costs 250-300 cycles

• Keep a cache of the last written value. Grouping helps!

34

Page 35:

Monitor and Allocate

• RMID (monitoring) and CLOSid (allocation) are different IDs

• Want to monitor and allocate the same set of tasks easily

– perf cannot monitor the cache-alloc cgroup / now resctrl(?)

35

Page 36:

Agenda

• Problem definition

• Existing techniques

• Why use Kernel QOS framework

• Intel Cache qos support

• Kernel implementation

• Challenges

• Performance improvement and Future Work

36

Page 37:

Performance Measurement

• Intel Xeon based server, 16GB RAM
• 30MB L3, 24 LPs
• RHEL 6.3
• Comparison with and without cache allocation
• Controlled experiment

– a PCIe device generates an MSI interrupt; measure the time to respond

– also run memory-traffic-generating workloads (a noisy neighbour)

• This experiment does not use the current cache-alloc patch

37

Page 38:

Performance Measurement [1]

[Chart: 2.8x, 1.5x, and 1.3x improvements]

- Minimum latency: 1.3x improvement; max latency: 1.5x improvement; avg latency: 2.8x improvement

- Better consistency in response times, with less jitter and latency alongside the noisy neighbour

38

Page 39:

Patch status

Cache Monitoring (CMT): upstream in 4.1.

Cache Allocation (CAT)/CDP for L3: framework (global CLOS/CBM management, hotcpu, HSW, sched support) in good shape, but the cgroup interface was rejected (Vikas Shivappa). New resctrl interface and per-socket CLOSid support in progress (Fenghua Yu).

Memory b/w Monitoring: upstream in 4.6 (Vikas Shivappa).

OpenStack integration (libvirt update): support built for CMT/MBM and the CAT cgroup interface (Qiaowei [email protected]).

Container support (Docker): support built for the CAT cgroup interface (Intel).

39

Page 40:

Future Work

• Perf overhead during CQM/MBM

• Support data per-process

• Improve and unify ID management for RMID/CLOSID

40

Page 41:

References

• [1] http://www.intel.com/content/www/us/en/communications/cache-allocation-technology-white-paper.html

41

Page 42:

Questions ?

42

Page 43:

Backup

43

Page 44:

Representing cache capacity in Cache Allocation (example)

[Diagram: a capacity bitmask with bits Bn … B1 B0, mapping onto cache ways Wk, W(k-1), … W3 W2 W1 W0]

- Cache capacity is represented using a 'cache bitmask'

- However, the bit-to-way mappings are hardware-implementation specific

44
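A short sketch of working with such capacity bitmasks (illustrative Python, not a kernel interface; the helper names are invented): each set bit enables one cache way, and, as the next slide notes, hardware typically requires the set bits to be contiguous.

```python
# Validate a capacity bitmask and compute the share of cache it grants.

def is_contiguous(mask):
    if mask == 0:
        return False
    while mask & 1 == 0:        # strip trailing zero bits
        mask >>= 1
    # what remains must be a solid run of ones (no holes)
    return (mask & (mask + 1)) == 0

def capacity_share(mask, num_ways):
    # fraction of the cache ways this bitmask grants
    return bin(mask).count("1") / num_ways

assert is_contiguous(0b00001111)          # ways W3..W0
assert not is_contiguous(0b01010000)      # holes are not allowed
assert capacity_share(0b00000011, 8) == 0.25
```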

Page 45:

Bitmask Class of Service IDs (CLOS)

Default bitmask - all CLOS ids have all of the cache:

        B7 B6 B5 B4 B3 B2 B1 B0
CLOS0    A  A  A  A  A  A  A  A
CLOS1    A  A  A  A  A  A  A  A
CLOS2    A  A  A  A  A  A  A  A
CLOS3    A  A  A  A  A  A  A  A

Overlapping bitmask (only contiguous bits):

        B7 B6 B5 B4 B3 B2 B1 B0
CLOS0    A  A  A  A  A  A  A  A
CLOS1                A  A  A  A
CLOS2                      A  A
CLOS3                      A  A

45
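The two tables above can be written as hex masks (assuming, for illustration, that the overlapping example assigns the low-order ways; the exact bit positions are hardware-specific, per the previous slide):

```python
# Default: every CLOS sees all 8 ways. Overlapping: lower-priority
# classes get contiguous subsets of the ways CLOS0 already owns.

default_clos = {0: 0xFF, 1: 0xFF, 2: 0xFF, 3: 0xFF}
overlap_clos = {0: 0xFF, 1: 0x0F, 2: 0x03, 3: 0x03}

def shares_ways(mask_a, mask_b):
    return (mask_a & mask_b) != 0

# In the overlapping setup, CLOS1's ways are a subset of CLOS0's:
assert overlap_clos[1] & overlap_clos[0] == overlap_clos[1]
# CLOS2 and CLOS3 contend for the same two ways:
assert shares_ways(overlap_clos[2], overlap_clos[3])
```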