HVM Dom0: Any Unmodified OS as dom0
Xiantao Zhang, Jun Nakajima, Dongxiao Xu
Speaker: Will Auld
Intel Corporation
Legal Disclaimer
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL’S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL® PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. INTEL PRODUCTS ARE NOT INTENDED FOR USE IN MEDICAL, LIFE SAVING, OR LIFE SUSTAINING APPLICATIONS.
Intel may make changes to specifications and product descriptions at any time, without notice.
All products, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice.
Intel, processors, chipsets, and desktop boards may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2013 Intel Corporation.
Outline
• Para-virtualized Dom0 – history & problems
• Why HVM dom0?
• Technologies in HVM dom0
• Call to Action
• Takeaways
Today’s Xen Architecture (Since Xen3.0)
[Diagram: the Xen hypervisor (safe HW interface, event channels, virtual MMU, virtual CPU, control interface) runs directly on the hardware (SMP, MMU, physical memory, Ethernet, SCSI/IDE; VT-x, VT-d, 32/64-bit, AGP, ACPI, PCI). Dom0 runs XenLinux with native device drivers, back-end drivers, and the device manager & control software. VM1 and VM2 run XenLinux guests with front-end device drivers and unmodified user software; VM3 runs an unmodified guest OS (Windows, Linux) with unmodified user software.]
History of PV Dom0
PV Dom0 evolution:
• …
• Xen Linux 2.6.18
• Xen Linux 2.6.27
• Linux 2.6.32 + PVOPS patchset
• PVOPS Linux merged into upstream Linux 3.0
Challenges:
• Tremendous effort spent pushing patches to Linux upstream.
• Ongoing maintenance effort.
• Hard to push certain features/fixes into Linux upstream:
− Xen PAT
− Xen ACPI
− Xen RAS
Why was dom0 PV-only?
• Problems
− Old x86 architecture (pre-VT)
− Virtualization was not considered in the design
− Many virtualization holes existed
− x86 architecture with 1st-gen VT
− Lacked performance optimizations
− No hardware features to support memory/IO virtualization
• Solution – PV dom0
− Modify dom0’s kernel source code to address virtualization holes.
− Adopt PV interfaces to enhance system performance.
− Network, storage, MMU, etc.
− But Linux only
Limitations of PV Dom0
• Dom0 kernel is modified Linux (XenLinux)
− Depends on kernel changes
− Hard to push some changes (RAS, PAT, ACPI) to upstream Linux
− Can't support unmodified OSes (Windows, Mac OS, etc.)
− Can't leverage VT for performance enhancement
• Performance limitations of dom0
− 64-bit dom0 suffers from poor performance
− Super pages can't be supported well
− Fast system calls can't be supported either
− Various unavoidable traps to the hypervisor:
− Thread switches (FPU, TLS, stack switch)
− MMU update operations
− Guest page faults
…
New Hardware Virtualization Technologies
• CPU virtualization
− Virtualization holes are finally closed by architecture
− CR access acceleration
− TPR shadow/APIC-v
• Memory virtualization
− EPT/VPID – memory virtualization is done by hardware
− EPT super page supported
− Unrestricted guest
• I/O virtualization
− VT-d supports direct IO for guest
− SR-IOV
• Interrupt virtualization
− APIC-V
− Posted interrupts
HVM domains now achieve performance comparable to PV domains
Good chance for improving dom0
• Goal
− Remove PV dom0’s limitations
− Leverage new VT technologies to enhance dom0’s performance
• Options
− PVH dom0: run a PV kernel in an HVM container, leveraging some VT technologies (e.g. EPT) to enhance dom0's performance. Limited to Linux.
− HVM dom0: run an unmodified OS (possibly with PV drivers) in an HVM container with full use of VT technologies. Ideally, any unmodified OS can be supported.
Our choice: HVM dom0
Xen Architecture with HVM dom0
[Diagram: the Xen hypervisor runs in VMX-root mode on the hardware, exposing a safe HW interface, event channels, virtual MMU, virtual CPU, and control interface. HVM dom0 (Linux, Windows, etc.) runs in VMX non-root mode and hosts the native device drivers, the device manager & control software, back-end device drivers, and Qemu (virtio back-end) in ring 3. HVM domains run unmodified guest OSes (Windows, Linux) with front-end device drivers (e.g. virtio) and unmodified user software. Transitions between hypervisor and domains occur via VM entry/exit and interrupts.]
Benefits of HVM dom0
• More choices for Dom0 (Windows, Mac OS, etc.).
• Better performance compared with 64-bit PV dom0.
• Reduced Xen Linux kernel complexity and maintenance effort.
• A more flexible Xen hypervisor supporting more use cases:
− Desktop virtualization to benefit Windows/Mac OS users.
− New Xen client hypervisor usage.
− Mobile virtualization.
• Covers more virtualization models.
− First Windows/Mac OS-based open-source type-1 hypervisor.
How to make HVM dom0 work?
• CPU virtualization
− Same as HVM domU.
• Memory virtualization
− Adopt EPT/NPT for memory virtualization
− Super pages are used for performance enhancement
• IO virtualization
− With VT-d, all physical devices are assigned to dom0 by default
− Port IO & MMIO accesses don't trigger VM exits
• Interrupt virtualization
− Dom0 controls the physical IOAPIC
− Dom0's local APIC is virtualized
− The hypervisor owns the physical local APIC
HVM dom0 Boot Sequence
• EFI-based system as HVM dom0 (e.g. Windows*)
− Dynamically de-privilege the EFI shell into an HVM guest environment
− Boot flow:
System power on → boot EFI shell → execute startup.nsh → xen_loader.efi → Xen entry point → start_xen() → construct_dom0() → prepare_hvm_context() → VMLAUNCH back to the original EFI shell → load OS as usual
startup.nsh:
xen_loader.efi xen.gz /boot/efi/ia32.efi
xen_loader.efi:
− Dynamically de-privileges the EFI shell
− A single EFI binary that loads Xen and sets up the return point from the hypervisor
− After returning to the EFI shell, the EFI environment runs inside an HVM guest
EFI is dynamically de-privileged to HVM container
HVM dom0 Boot Sequence (Cont'd)
• Linux as HVM dom0
− System loaded by GRUB
− Boot flow: system power on → GRUB → Xen entry point → start_xen() → construct_dom0() → prepare_hvm_context() → VMLAUNCH to kernel entry
Similar to today's PV dom0
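The GRUB-based flow matches how Xen is conventionally chain-loaded today: GRUB loads the hypervisor as the multiboot kernel and the dom0 kernel/initrd as modules. A typical grub.cfg entry might look like the following (file names, root device, and `dom0_mem` value are illustrative):

```
menuentry 'Xen with HVM dom0' {
    multiboot2 /boot/xen.gz dom0_mem=2048M
    module2    /boot/vmlinuz root=/dev/sda1
    module2    /boot/initrd.img
}
```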
Multi-domain Support
Dom0 OS | Back-end Driver | Event Channel / Xen Bus Driver* | User-land tools/libs | Qemu
Linux   | Ready           | Ready                           | Ready                | Ready
Windows | Virtio Ready    | Not ready                       | Partially            | Ready*
• Qemu is a must
− Both Linux & Windows support Qemu
• PV driver support
− The Xen bus/event channel mechanism is needed
− One virtual PCI device (the Xen platform PCI device) is exposed
− Port the platform PCI device's logic from Qemu into the hypervisor
Call to Action
• Port (or simplify) Xen’s userland tools/libraries
− For Windows
• Enable PV drivers for DomU guests
− For performance enhancement of DomU
• Port Xen-Qemu logic to Windows
Takeaways
• Unmodified OS as HVM dom0
• An add-on feature to the Xen project
− Doesn't break existing Xen usage models
− Only used on new x86 platforms
• Resolves PV dom0's limitations
• Covers more usage models
− New type of Xen client (with a Windows HVM dom0)
− Creates a trusted execution environment for a single Windows/Mac OS
− Xen can be used in the PC market like today's type-2 VMMs