Personal vDisk technology has changed the way VDI architects think about use cases. Come learn about the latest best practices for the design, sizing and ongoing management of pooled VDI desktops utilizing personal vDisks. You’ll gain skills and knowledge regarding persistent, highly personalized virtual desktop deployments using Citrix XenDesktop VDI and Citrix XenClient. Highlights include an architectural overview, use cases, deployment and troubleshooting.
1
Personal vDisk (PvD) overview and usage -- what is it and when to use it
Deployment and Design Considerations -- how to get PvD up and running
PvD Management Details -- what do I need to worry about?
Troubleshooting and Support Resources -- what to do and where to go
2
3
Try to use pooled/shared, but advanced workers struggle with the lack of customization
4
Address the advanced user with dedicated, but now management and costs become challenging
5
Provides the management efficiency of pooled/shared/random with the personalization breadth of dedicated. Each user gets a workspace stored on any storage configured on the hypervisor. Provides complete personalization.
User profile and machine state are maintained on a new PvD disk attached to the VM. The workspace only contains user changes, to reduce storage requirements.
6
7
When a roaming profile does not provide sufficient personalization (i.e., machine settings such as apps, printers, etc.)
When users need to install applications that IT does NOT want to make part of the base image
When IT wants to roll out apps to departments BUT does not want to make them part of the base image
Anywhere you are using dedicated VDI pools, you should consider PvD instead
Blindly deploying PvD as the default desktop could be costly -- over-delivering on personalization capabilities will come at an infrastructure cost -- you don't need to provide call center employees a PvD-enabled desktop
8
Copy-on-write: it's for things like opening a huge file and changing one block in the middle. Right now we can't relocate those files, so we discard any writes to base file content. CoW retains a bitmap of modified blocks for each file modified in the base, and we reintegrate each file on image update by merging the modified blocks into the PvD copy. This is for better application compatibility and reduced space usage in the PvDs, especially for AV signatures/definitions... IIS configuration change retention was a driver for this. CoW -- if an application was writing some content into files, it would go into the base image file if there is no rule for that file to be copied to the pool VM. After the reboot, all the changes to the base image are lost, so there may be application compatibility problems because those changes are being lost.
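To make the block-merge idea concrete, here is a minimal sketch (not Citrix code; the file names, block size and 'dirty' block indices are invented for illustration) of how a per-file bitmap of modified blocks could be replayed onto a fresh copy of a file from an updated base image:

# Conceptual sketch only -- assumed file names, block size and block indices.
$blockSize = 4096
$newBase = [System.IO.File]::ReadAllBytes('C:\Temp\base_v2.bin')   # the file as shipped in the updated base image
$oldCopy = [System.IO.File]::ReadAllBytes('P:\app\file.bin')       # the user's PvD-side copy of the file
$dirty   = 3, 17, 42                                               # block indices flagged in the CoW bitmap

# Start from the updated base content, then overlay only the blocks the user changed
$merged = [byte[]]$newBase.Clone()
foreach ($i in $dirty) {
    [System.Buffer]::BlockCopy($oldCopy, $i * $blockSize, $merged, $i * $blockSize, $blockSize)
}
[System.IO.File]::WriteAllBytes('P:\app\file.bin', $merged)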
9
10
Option in the installer to 'enable' PvD … BUT PvD installs no matter the selection -- disabled means the PvD service will be running but will remain idle -- enable later by running the inventory. You cannot disable once enabled; you would need to revert to a snapshot taken when PvD was disabled.
11
In testing with XD 5.6, the WC/PvD IOPS ratio during a login storm was about 40:60. In Excalibur that ratio shifts even further toward the PvD, at 33:66.
12
Fundamentally, the only significant difference is the increased CPU usage with PvD. There is around 3-6% CPU overhead from PvD when the hypervisor is consuming about 40% CPU without PvD. For example, if 20 users take on average 40% CPU on pooled desktops, they would take ~43-46% CPU with PvD. The 11% Win7 user density overhead is strictly based on the Login VSI score and does not translate to 14% fewer users in all scenarios, so the customer would not lose 14% density in all cases.
13
14
15
PvDs are created and attached by the Studio, ViaB and PVS wizards. They are formatted during the first PvD boot.
\UserData.v2.vhd (located on the root of the PvD):
- Contains everything not in the user's profile (i.e., not in C:\Users)
- Sized according to the allocation split (default is 50/50)
- A .thick_provision sparse file exists to display the correct amount of free space to the user
The VHD created using the template is mounted as P:, and that VHD has another VHD on it which is mounted as V: and is hidden; it captures the apps installed/machine state. Unfortunately this VHD on the volume is called UserData.vhd ... but it is really the machine state and not 'user data'. UserData.vhd contains only applications; perhaps a name change would be useful. The .thick_provision file is a sparse file that has no space allocated but is EOF'ed/VDL'ed to indicate it consumes the required space. Its size is calculated from the split of PvD space between profile and apps, minus the amount currently used by UserData.vhd.
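As a quick aside on how that trick works, here is a small illustration (not the PvD code itself; the file name and size are arbitrary, and fsutil needs an elevated prompt) of a sparse file that reports a large logical size while allocating essentially nothing on disk, which is what lets .thick_provision make the user-visible free-space numbers add up:

fsutil file createnew C:\Temp\demo.sparse 0
fsutil sparse setflag C:\Temp\demo.sparse
fsutil file seteof C:\Temp\demo.sparse 5368709120     # 5 GB logical size, nothing written
(Get-Item C:\Temp\demo.sparse).Length                 # reports 5368709120 bytes
fsutil sparse queryrange C:\Temp\demo.sparse          # no allocated ranges reported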
16
• PvDisk created during catalog creation by copying UserData.VDESK.TEMPLATE from the base VM
• By default: 10 GB -- 50/50 split for User Data / App Data
17
Separates the user profile data from the application data. Either expand on the hypervisor console … or use the PoSH script.
18
(1) MinimumVHDSizeMB (default is 2 GB)
(2) EnableDynamicResizeOfAppContainer ("1" by default; "0" if upgrading from Ibiza with %Split != 50)
(3) PvDReservedSpaceMB (default is 512 MB)
(4) PercentOfPvDForApps (default is 50)
(5) EnableUserProfileRedirection (default is 1)
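As a sketch of where these can be inspected (the registry path below is an assumption based on a default PvD install; verify it on your own image before relying on it), something like the following PowerShell can be used:

$pvdConfig = 'HKLM:\SOFTWARE\Citrix\personal vDisk\Config'   # assumed default location

# Inspect the current values
Get-ItemProperty -Path $pvdConfig |
    Select-Object MinimumVHDSizeMB, EnableDynamicResizeOfAppContainer,
                  PvDReservedSpaceMB, PercentOfPvDForApps, EnableUserProfileRedirection

# Example: give the application area 70% of the PvD instead of the default 50%
Set-ItemProperty -Path $pvdConfig -Name PercentOfPvDForApps -Value 70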
19
It will take space from the profile area, but not more than 50% of free profile space (free as it was before the reboot). The amount of space taken from the profile free space is computed as follows:
(1) Compute the total free space available for expansion of the VHD: free space available on the VHD + free space available in the profile area - PvDReservedSpaceMB
(2) Compute the usage ratio of apps versus profile: usage ratio = VhdUsedSpace / (ProfileUsedSpace + VhdUsedSpace)
(3) New VHD size = VhdUsedSpace + usage ratio * free space available for expansion
Finally, if the increase in size is determined to be more than 50% of the free space available for expansion, the new VHD size is reduced so that the increase is limited to 50% of the free space available for expansion.
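A worked example of that calculation (all numbers are invented for illustration; the slide does not say whether the 'increase' is measured against the VHD's used space or its total current size, so total current size is assumed here):

# Hypothetical starting point: 10 GB PvD, app VHD currently 3.5 GB
$VhdUsedSpaceMB     = 3000
$VhdFreeSpaceMB     = 500
$ProfileUsedSpaceMB = 1000
$ProfileFreeSpaceMB = 4000
$PvDReservedSpaceMB = 512

# (1) Total free space available for expanding the app VHD
$FreeForExpansion = $VhdFreeSpaceMB + $ProfileFreeSpaceMB - $PvDReservedSpaceMB   # 3988 MB

# (2) Usage ratio of apps versus profile
$UsageRatio = $VhdUsedSpaceMB / ($ProfileUsedSpaceMB + $VhdUsedSpaceMB)           # 0.75

# (3) Proposed new VHD size
$NewVhdSizeMB = $VhdUsedSpaceMB + $UsageRatio * $FreeForExpansion                 # 5991 MB

# Cap: the increase may not exceed 50% of the free space available for expansion
$CurrentVhdSizeMB = $VhdUsedSpaceMB + $VhdFreeSpaceMB                             # 3500 MB
if (($NewVhdSizeMB - $CurrentVhdSizeMB) -gt (0.5 * $FreeForExpansion)) {
    $NewVhdSizeMB = $CurrentVhdSizeMB + 0.5 * $FreeForExpansion                   # capped at 5494 MB
}
$NewVhdSizeMB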
20
PvD plays nicely with profile management solutions. PvD can be used as a simple profile management solution itself.
For simple environments with single 'desktops', PvD + Citrix Profile Management (UPM) makes a powerful combination, enabling roaming profiles and persistent personalization! Folder Redirection may still be used effectively here as well.
21
The user then has a local profile which is stored in the user's PvD. It is protected from a reset (only the app space is reset). It is attached to that PvD … same constraints as a local profile; no roaming.
22
23
When using a third-party profile solution (such as Citrix Profile Management), the profile is actually captured on the respective network/storage location. At this point, PvD just behaves as the cache for the profile. This means it is safe to delete it as part of a Director-triggered reset, since it will just be copied back down on the next logon. It also means you can be much more aggressive with your app/profile split for the PvD, since the profile space will now only contain logs and supporting data.
24
25
Support for personal vDisk - This XenDesktop feature is a personalization solution for pooled-static virtual desktops. Profile management detects the presence of personal vDisks and adjusts its configuration automatically so that profile data is written to and read from the personal vDisk.
26
27
Differencing disks store changes as block-based differences. PvD stores changes 'fully', e.g., complete files/registry keys/values/etc. This key difference allows PvD to retain user personalization and merge changes across image updates.
28
Base VM mode allows creation of the PvD inventory.
29
Pool VM mode is when PvD is 'in use' by the user. Inventory creation is not available in pool VM mode.
30
PVS will run the PvD inventory automatically as part of the auto-update process. For MCS in Excalibur, the image preparation is performed on the private copy of the snapshot that MCS takes. This happens by attaching a VM to the copied disk and booting it with a set of instructions on a second disk which tell it what to do. If selecting a PvD catalog type, this happens in two phases. The first phase performs a re-arm for Office if it is installed, checks that DHCP is enabled on all network adapters, and checks that PvD is installed. The second phase is executed if the PvD tools are found and runs the inventory generation. Supported on XS 6.2, SCVMM 2012 & SP1, and vSphere 4.1+.
31
File Catalog - Location: 1 = PvD, 0 = Base
KeyCatalog - Location: 00 00 00 00 = Base, 01 00 00 00 = PvD (first four sets of numbers)
MojoControl - stores the resource catalogs. It's loaded as MojoControl.dat from the VHD inside the PvD attached to the VM. It's stored in C:\Program Files\Citrix\personal vDisk\Settings (unless dev changed it on me at the last minute). IVM loads this during startup. Each subkey in MojoControl is one of the resource catalogs, but they are stored inside the same hive file.
ObjectCatalog is legacy and is unused. It might have some housekeeping data in there, but I'm pretty sure it's unused atm.
RingThree is the graft point for the PvD registry (i.e., the registry data that changes as PvD executes and people make changes). This hive is also located in the 'Settings' folder and is loaded by IVM at system start. It is protected from direct access by client applications at runtime (else there's a chicken-and-egg problem). IVM takes care of blending its content as required.
When a reset occurs, the UserData.VHD is overwritten with the one located here: C:\ProgramData\Citrix\personal vDisk\Settings, which then kicks off preparation, since the service/driver recognizes it as 'new'.
32
FileCatalog key is no longer valid and is not used
33
Shutdown/reboot will trigger a reminder - click Cancel when prompted and update the PvD inventory.
34
This is where you could change the drive letter M:. The user disk drive letter (P:) can be managed via Studio.
35
36
Each user's PvD always contains a 'master' copy of the inventory data used when its corresponding template VHD was built. This copy is stored in V:\CitrixPvD\Settings\Inventory\diff\*.dat. A user-readable (but unused by PvD) copy is kept in the same location, but with ".txt" appended (e.g., V:\CitrixPvD\Settings\Inventory\diff\Snapshot.dat.txt). These .dat files comprise a proprietary binary database of the files/registry keys/values computed when the inventory was constructed.
You will see it on both C: and P: -- the folder contains:
- Catalog changes
- Current inventory; it's an exact copy of \ProgramData\Citrix\Personal vDisk\Inventory
- Merge of C:\CitrixPvD\... and P:\CitrixPvD\... after a base image update
\ProgramData\Citrix\Personal vDisk\Inventory is where the inventory and catalog are initially created when update inventory is run.
37
38
39
40
41
42
43
44
45
46
PvD has been designed so that applications installed while PvD is running should 'just work'. The PvD KMDs (kernel-mode drivers) load in Windows 'phase 1' (very early boot). This means applications that install in phase 0 (very, very early boot) won't work: certain AVs, hardware drivers, etc. - but these shouldn't be installed in the PvD anyway. These types of applications will work fine if installed into the base VM! Platform software should be installed in the base VM as a best practice: Windows service packs and updates, etc. … Applications common to many users should be installed in the base VM: Office, browsers, Adobe Reader/Flash, etc. …
47
Prior to virtualization (very early in the boot stage), all the logs generated by the PvD drivers are appended to the IvmSupervisor log on the C: drive. As soon as the driver finds the PvD drive, it starts writing into the IvmSupervisor log located on the root of the PvD drive, so that it's not lost when C: is reset.
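For support cases it can help to grab both copies of that log before the desktop is reset. A minimal helper sketch follows; the exact log file name and the root-of-drive locations (C:\ and the default PvD drive P:\) are assumptions taken from the description above, so adjust the paths to what you actually find on the VM:

# Assumed file name and locations -- adjust to match the actual log paths on the VM.
$dest = "C:\Temp\PvDLogs_$(Get-Date -Format yyyyMMdd_HHmmss)"
New-Item -ItemType Directory -Path $dest | Out-Null

foreach ($drive in 'C:\', 'P:\') {
    $log = Join-Path $drive 'IvmSupervisor.log'
    if (Test-Path $log) {
        Copy-Item $log (Join-Path $dest ($drive.Substring(0,1) + '_IvmSupervisor.log'))
    }
}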
48
PEBKAC - Problem Exists Between Keyboard And Chair. Most of the time this is caused by PEBKAC or an allergic reaction to applications installed in the base VM.
49
50
Helpdesk-facing PvD metrics and support: application area in use / total size, along with user profile area in use / total size, and the ability to perform a PvD reset.
51
52
53
54
55