
This document and parts thereof are the sole property of Aidan Finn and may not be copied, reproduced, shared or distributed in any shape or form, physical or digital, without the sole and express permission of Aidan Finn.

Aidan Finn

04 February 2009

http://joeelway.spaces.live.com/

An Introduction to Hyper-V

An introduction to the Windows Server 2008 machine virtualisation solution from Microsoft


Table of Contents

Introduction
Introducing Machine Virtualisation
    The Costs of Legacy Server Deployments
    What Is Virtualisation?
        Server Virtualisation
        Desktop Virtualisation
        Presentation Virtualisation
        Virtual Desktop Infrastructure
        Application Virtualisation
        Hardware Virtualisation
    Wrapping up the Introduction
Hyper-V Details
    Hyper-V Architecture
    System Requirements
        Licensing
        Hardware Requirements
        Guest Support
        Sizing Of Hyper-V Hosts
    Application Support
    Hyper-V Features
        Storage
        High Availability
        Licensing
        Maximum Configurations
        VM Processors
        Storage Controllers
        VM Disks
        Backups
        Networking
        CD/DVD
        Overhead of Virtualised Resources
        The Host OS Installation Type


Deploying Operating Systems
FUD
The Future
Summary


Introduction

It’s been a while since I’ve written anything like this outside of work. There was a time when I was

churning out a document on some product or other every couple of weeks. Not only was it good for

generating a profile for me but it was a great way to force myself to learn something new. Then I

joined a company where I was snowed under with a major deployment project. I found myself 

managing a blade and SAN installation that had gone way behind schedule and it was my job to

rescue it. What really whetted my appetite was the opportunity to learn and deploy a large VMware

ESX and VI3 platform with a complex network. I learned a lot in that project, most important of all

the lessons being that a competent technical person should be involved at the very start of the

project and that project management doesn’t happen by itself. But the downside was that I had no

time to write anything.

As I was working on that project I attended a series of seminars that Microsoft Ireland ran called

“The Longhorn Academy”. For six months we learned about the new server product, Windows

Server 2008, while it was still a beta and release candidate. One of the sessions focused on

something new from Microsoft. It was a hypervisor based virtualisation product aimed squarely at

competing with VMware’s ESX. We all guffawed at the audacity of this. All sorts of rumours whirled

around about this “Hyper-V”. My virtualisation project was due to finish long before Hyper-V was

released so I put it to the back of my mind ... but not forgotten. We finished that VMware project

just a week behind my revised schedule. In fact, the VMware deployment really only took 3 days

once I had the licenses to install. Much of the credit had to go to the excellent administrator course

that VMware has developed. It was a fantastic grounding.

After leaving that company I started writing some chapters for two books on Windows Server 2008.

It was good to be writing again. The lead author in question was a great teacher. I both dreaded

and looked forward to his notes on my contributions knowing that I’d learn something new and have

a substantial amount of rewriting to do.

I then moved on to another job where the directors wanted to deploy a mission-critical virtualisation

platform. We were all agreed that VMware was the correct way to go. We were going to take our

time with this one. In the meantime, I had infrastructure that I needed to deploy. If you know me,

you’ll know I’m a sucker for Microsoft’s System Center management suite. Optimised Infrastructure

where the network manages itself and Dynamic Systems Initiative (DSI) where IT should be agile,

proactive and flexible to business needs are both possible. I’d proven that in a previous

administration role where 3 of us managed 170+ servers around the world. We spent around 3

hours a day doing “operations” work and the rest doing project-based engineering. So we installed

Operations Manager 2007 SP1 to manage the existing systems. As I expected it impressed

management and our clients.

In the meantime I was busy working in the Irish IT community. I’d started the Windows User Group

and I was spending a lot of time talking to Microsoft people or spending time at their sessions. This

new Hyper-V product had progressed substantially. Sure, it still didn’t have everything that VMware

had to offer but what I liked was the strategy. Microsoft believed their differentiator was inclusive,

top-to-bottom (hardware, virtualisation, operating system, services and application) and cradle-to-grave management. Operations Manager would take care of health and performance.


Configuration Manager could be the deployment and auditing tool. Free solutions such as WDS and

WSUS would add patch management and OS deployment. And a new version of Virtual Machine

Manager (2008) would manage the virtualisation platform much in the same way that Virtual Center

does for ESX (both are additional licenses you must purchase). VMM would integrate with OpsMgr

to tie together administration with health and performance. And I would later learn that hardware

manufacturers such as HP and Brocade would add “Pro Packs” that add hardware management and

dynamic performance optimisation in Hyper-V.

After some long and deep thinking it was decided that we would go with Hyper-V as our

virtualisation solution. I immediately started working with a release candidate version of the

product, eagerly anticipating the release to manufacturing which happened several months after the

release of Windows Server 2008. I made the most out of my TechNet license and started working

my way towards an infrastructure design that I’d be happy with. Eventually we deployed a Hyper-V

cluster managed by VMM 2008 and OpsMgr 2007 SP1.

Over the months I’d gathered lots of notes on how to do things, how to work around issues and what is

supported and what’s not. Microsoft seemed to think that blogs = documentation and facts about

using Hyper-V were hard to find. After many months I had put together a collection of facts, tips and

workarounds which I’d used for some presentations. I started thinking that maybe it was time I

started writing again. So that’s where this document came from. 

I’m hoping, but not promising, that this will be the first of a series of documents dealing with Hyper-

V and related technologies. There’s no schedule, no plans of content ... and no publisher sending me

emails about deadlines. I’ll be writing where and when I have time and opportunity. So here it is.

This document will introduce the reader to the concepts of not just server, but machine, virtualisation. I’ll talk about the benefits and the challenges faced by an organisation looking at

deploying a machine virtualisation platform. Then I’m going to talk about the features of and the

system requirements of Hyper-V on Windows Server 2008 as well as the free to download and use

Hyper-V Server 2008. If you’re a techie and familiar with these subjects then this won’t be a good

read for you; but keep an eye on my blog and RSS feed because I hope to write much more for

you in later documents.

I’m writing these documents much for the same reason that I’ve written previous documents such as

my guide to software deployment using Configuration Manager 2007. Quite simply, Microsoft has

failed to document their products. We’re expected to buy books that are based on unrealistic

scenarios from MS Press and attend MOC training courses that are little more than introductions

and marketing sessions. Microsoft’s team members and some PSS engineers do their best by

blogging but blogs are not documentation. This (hopefully) series of documents will centralise what

I’ve learned so far into easy-to-access, bite-sized chunks and with some luck will save you time by not

having to trawl through endless websites and blog posts.

If you do find this useful I would appreciate it if you spread the word to others and link to where I

have stored it on my site. Who knows, I may meet you somewhere and buy you a drink to say

“thanks”! 


Introducing Machine Virtualisation

I want to talk a little about the problems we’ve had with legacy server deployments and the hidden

costs of owning those servers. Then we’re going to talk a bit about what virtualisation is and how it

can be used to resolve those issues.

The Costs of Legacy Server Deployments

Let’s go back in time to when I did a typical server deployment in 2003. In terms of hardware, it

wasn’t anything special or unique. I reckon that what I’m going to describe is a story you’ll be

familiar with.

We were spinning off a new company and were in a position where we needed to start deploying

our own new server network. We had branch offices all around the world. We knew how to

architect a network but our biggest challenge was that we didn’t know how big we were going to

get. In the Irish market we were a big company. Globally, or Microsoft-wise, we were a small/medium

enterprise (SME). We had the typical computer room. We went out and bought servers. For fault

tolerance we were spreading our servers across two racks. This meant we had A+B power, A+B

networking and A+B racks. The computer room was pre-populated with lots of empty racks and we

thought that would be fine for many years. We were a finance company. If you’ve been there then

you know that servers pop up like mushrooms overnight. After 1 year my boss had to get in

architects and specialist engineers to expand our computer room. I switched to blade servers and

SAN storage from the typical rack server and DAS. Despite using denser computing we still

swallowed up space like a black hole.

Looking back on that experience I can see many things that I could have done better. I now have an

expanded responsibility in a different industry so I have a greater appreciation for the business

concerns of what I did back in 2003. Let’s quickly break down the negative impacts on IT and the

business of this typical server deployment.

• Time to deploy servers in response to business needs was slower than it could have been.

We had a pretty express purchase requisition system but we still had paperwork to push and

a tendering/ordering/delivery process that could take anything from 2 to 6 weeks. Extreme

cases could take even longer.

• We used Automated Deployment Services to deploy server images. Imaging like this

drastically reduced the time to deploy a server from 2 days down to 3 or 4 hours. We had

HP DL320’s, DL360’s, DL380’s and ML370’s as our standard servers (depending on role) as

well as adding blades at the end of my time there. We then saw the move towards G4

servers. What did all this mean? We had unique drivers for all these machines and this

mandated having many images. That’s a lot of work and maintenance that detracts from the

benefits of imaging solutions, particularly for organisations that don’t deploy servers on a

frequent basis.

• We ate up rack space and, logically, floor space. Rent for office space is a big expense.

Converting normal office space into physically secure and environmentally controlled

computer room space is a huge and pricey operation.


• Lots of physical servers mean that you consume more electricity. The electricity is

effectively converted into heat at the back of the servers and that heat must be managed by

power-consuming cooling systems, even in rather modest 1 or 2 rack computer rooms. I’ve

read that a typical 1 or 2U server has the same power demand as a car over 1 year. If the

greens get their way then that means carbon taxation could whack the server owners with a

big stick.

• Administrative costs are many. With more physical equipment there’s more that can break

and probably needs pricey maintenance contracts. There are more cables to manage too – 

that’s one of my pet hates. 

• Hardware is a restricted boundary. We can’t just pull disks from one generation of

equipment and put them into another generation. Replacing generations of hardware is a

painful rip-and-replace operation requiring operating system skills, networking skills and

application specific skills. And what about server breakages? We’d love to cluster

everything but that’s a huge expense, even for corporations. Clustering also complicates

deployment and management because a cluster is much more complicated (mainly hardware-wise

when using Windows Server 2008, because Failover Clustering is easy with that OS).

• Because we were in the finance industry, the Irish regulator mandated that we have a Disaster

Recovery (DR) site in which we had duplicate server capabilities. This can be a basic process

of transporting backup tapes and recovering to leased hardware. This is a gamble because

you never know what’s going to be in that box of chocolates and your iron-level recovery

from tapes probably won’t work. This requires using a second backup/recovery product to

image the servers and do driver substitution. That wouldn’t work for us because we had to

be able to invoke the DR site in 4 hours. This meant we had a live replication site. That

includes an extension of the WAN, network equipment, servers, operating systems,

applications and replication software. This all has to reside somewhere and consume more

space and power, usually a hosting facility where you can pay for services based on rack or U

space and power consumed.

• When I managed this server network, I managed it using Microsoft System Center. I

used Microsoft Operations Manager 2005 to manage health and performance. With the

built-in performance monitoring and reporting I could see that our average CPU utilisation

was no more than 8%. Only a handful of the Windows servers ran more than that, e.g. Citrix

MetaFrame, a simulation processing cluster and a SQL cluster.

Every single one of the above is a cost that can be translated into money lost by the business.

Remember that owning a server is much more than buying the thing. The power cost alone for 1

year may equal the cost of buying the hardware. The business is impacted because IT is slower to

respond to change and administrative costs are higher than they would ideally be ... IT is seen to be

“wasting” time dealing with essential technical issues that the business doesn’t understand or even

see.


I’ve not seen the report for myself but I’m told that Gartner reckons that the worldwide average

server CPU utilisation rate is 8%. A term that was bouncing around in 2004 was “Server

Consolidation”. Using this and the knowledge that we’re under-using servers, couldn’t we put more

applications onto one server? This is where veteran Windows administrators (and probably those of

other operating systems) start screaming. All sorts of things can go wrong here. Lack of support

between server application vendors, clashing DLL’s, backup/recovery complications and more frequent

maintenance windows can make this a complete nightmare. You also shouldn’t forget that it’s

common to grant administrative rights to a server to the administrators of the application. This can

be a problem when different applications have different administrators and this breaks the “Chinese

Walls” of regulatory compliance. We Windows Server administrators don’t use the “1 application =

1 server” model because we have MS shares. We do it because it promotes server and business

service performance, stability, security and availability.

A solution for these problems has existed for many years. It had a strong presence in the test and

development market and a niche part of the market had been using it in production. Those bleeding-

edge consumers, by their usage and demands for improvements, ensured that machine virtualisation

would start to gain widespread market acceptance in 2006. By 2008 the idea of server virtualisation

had become so accepted that it was being advertised on the radio.

What Is Virtualisation?

The concept of virtualisation is much bigger than it used to be just a couple of years ago. Here’s an

attempt at giving you a definition.

Virtualisation is the process of abstracting something from its physical or software installation

restrictions or boundaries so that it is more portable, flexible and accessible. This can be done to

either make services possible that previously weren’t or to get more utilisation from fewer

resources.

Reading back over that again makes me think I sound like someone who’s trying to avoid an answer.

The reason it’s so general is that virtualisation is much more than you might think. I

figure you already have a definition in mind that relates to VMware ESX, Citrix Xen or Microsoft

Hyper-V. It’s much bigger than that. 

Server Virtualisation

This is the process of using software that runs on a server to simulate many servers on that single

piece of hardware. This allows you to install server operating systems (and related services) on

those virtual machines (VMs) knowing that they have independent security boundaries, identities,

RAM, storage, drivers and network access. All of this is made possible by the virtualisation

software.

Each virtual machine is referred to as a guest or child partition. Each physical machine that stores

and runs the guests is referred to as a host or parent partition. The guest exists on the host only as a

few files. Common across many vendors’ solutions is that there will be two files. The first

one describes the configuration of the virtual server, such as storage definition, RAM

configuration, etc. The second is the virtual disk file that holds the guest’s storage.
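To make the idea of a VM being “just a few files” concrete, here’s a little sketch in Python. It’s purely illustrative – the field names are my own, not any vendor’s schema (real products store this in their own XML or binary formats):

from dataclasses import dataclass, field

@dataclass
class VirtualDiskFile:
    # The second file: a single host file holding the guest's entire disk.
    path: str          # e.g. "D:\\VMs\\web01\\web01.vhd" (hypothetical path)
    size_gb: int

@dataclass
class VMConfigFile:
    # The first file: describes the virtual hardware presented to the guest.
    name: str
    ram_mb: int
    virtual_cpus: int
    disks: list = field(default_factory=list)      # VirtualDiskFile entries
    networks: list = field(default_factory=list)   # virtual network connections

# A complete guest, portable as just these few files:
web01 = VMConfigFile(name="web01", ram_mb=2048, virtual_cpus=2,
                     disks=[VirtualDiskFile("D:\\VMs\\web01\\web01.vhd", 40)],
                     networks=["Production LAN"])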


The virtual machine consumes physical RAM, disk I/O and storage space; in fact it consumes slightly

more than the same specification of physical machine. However the guest consumes a share of the

host machine’s CPU resources. This means we can safely get 10 of those 8% CPU utilisation

machines onto a host as long as it has sufficient RAM, disk I/O and storage space.
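Here’s that consolidation arithmetic as a quick Python sketch. The 80% host CPU ceiling is my own assumption; the 8% guest utilisation figure is the one quoted above, and RAM is deliberately not oversubscribed:

# CPU is shared between guests; RAM must be fully provisioned per guest.
def guests_per_host(guest_cpu_pct=8, host_cpu_ceiling_pct=80,
                    guest_ram_gb=2, host_ram_gb=32):
    by_cpu = host_cpu_ceiling_pct // guest_cpu_pct   # 80 // 8 = 10
    by_ram = host_ram_gb // guest_ram_gb             # 32 // 2 = 16
    return min(by_cpu, by_ram)                       # CPU is the limit here

print(guests_per_host())  # 10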

If you logged into the virtual machine you wouldn’t know it was a virtual server without checking its

drivers. The drivers are specific to the virtualisation vendor and are completely unrelated to the

underlying hardware. You can browse the local drives of the virtual machine without any direct

access to the physical machine’s storage or that of any of the other virtual machines. 

There are two basic architectures, each of which will have derivations depending on the vendor. The

hosted solution is something like the free VMware Server. It’s a piece of software that runs on top

of the host machine’s operating system. It’s a very accessible and easy-to-use solution that can be

deployed on pretty much any piece of hardware as long as it has enough resources. However, it’s

not very efficient because each operation of the virtual machine is passing through two operating

systems instead of one. The level of security provided by the virtualisation system is not necessarily

the best that is possible – the virtualisation system and the virtual servers run as applications on the

host server. If you’re familiar with computer science theory then you know that this ring on the CPU

isn’t the most protected one and thus creates the potential (albeit slim) to attack one virtual server from

another as if it was another application in Windows, e.g. MS Word attacking MS Access.

A hypervisor is a very thin layer of software that resides on a piece of computer hardware. It can

replace the operating system and virtualisation application to create a server virtualisation

environment. It runs in a very low and privileged ring on the processor and this offers two benefits.

The first is that each virtual server operation passes through only its own operating system and a

very thin hypervisor. Secondly, because we’re lower in the ring stack, the virtual server has much

better security. In fact, with Hyper-V, a virtual machine has the same security on the CPU as it would

if it was a physical machine.

The 3 big names in server virtualisation are VMware ESX, Citrix XenServer and Microsoft Hyper-V, all

of which are hypervisors. There are some free but limited variants including ESXi and Hyper-V Server

2008.

Desktop Virtualisation

This is probably the kind of machine virtualisation that gained acceptance first in the market place.

It’s a very similar technology to server virtualisation except that it’s run on desktop and laptop

computers to create simulated hardware. An application runs on the host computer to create virtual

machines into which you can install an operating system. The GUI is designed to be a lot simpler to

use for non-administrators. As of now, there is not a hypervisor solution available to the public but

that’s being worked on by the likes of Citrix and probably a few others.


The reasons for using this technology can include:

• Access to a testing or development environment that can be quickly developed and reset to

a known healthy point.

• Provide a legacy operating system for running old business applications.

• Allow administrators to have a secure and portable “toolkit” where they can log in with full

privileges for administration work while their physical desktop is used purely for office and

communications work.

The two big names in this market are Microsoft’s free but lightly featured Virtual PC and VMware’s

fully featured (but requiring a purchase) Workstation. Virtual PC is good for those requiring an

economic compatibility or toolkit solution. VMware Workstation is excellent as a test, development,

lab and demonstration environment.

Presentation Virtualisation

You’ve probably used this type of solution before without thinking of it as virtualisation.

Presentation virtualisation allows more than one user to log into separate sessions on a single

server. So instead of running more than one virtual machine on one piece of hardware we have

more than one user logged into the physical server. You’ll know this better as Terminal Services.

Many companies have built solutions on top of this such as Citrix and 2X.

The basic concept is that the end user uses either a slim dedicated hardware device (a terminal) or a

software client on a PC to create an interactive desktop session on a centralised server. A dedicated

protocol such as Microsoft’s Remote Desktop Protocol (RDP) or Citrix’s Independent Computing

Architecture (ICA) allows the programs to run on the server while relaying the GUI, sound and

keyboard/mouse instructions between the client and the server.

Reasons for using presentation virtualisation include:

• You can abandon the rat race of upgrading PC’s every 3 to 5 years and switch to dumb

terminals, e.g. dedicated devices or converted PC’s, that require less power, have longer

lives and are plug’n’play replaceable. 

• Have a single point of administration to reduce administration costs.

• Have a single point for sharing business data, thus resolving one of the complications of inter-

branch office collaboration.

• Replace complicated VPN technology with simple and secure SSL web-based access to presentation virtualisation.

Virtual Desktop Infrastructure

VDI takes the communications protocols and access mechanisms of presentation virtualisation and

server virtualisation to create an alternative user working environment. The server virtualisation

technology creates virtual machines (VM’s) that desktop operating systems can be installed into. A

user can use either a dedicated device (a terminal) or a software client to log into those virtual

machines. This brings about the benefits of presentation virtualisation. However, some of the

complications of presentation virtualisation are avoided:


• Users have a familiar working space, i.e. a dedicated virtual PC instead of a locked-down

session.

• Applications that aren’t Terminal Services friendly will work as normal.

• Helpdesk work doesn’t require complicated and time-consuming change control procedures,

e.g. a simple change on a Terminal Server affects everyone but a change inside a virtual

machine only affects that user.

These solutions are based upon existing hypervisor server virtualisation technologies. They are

referred to as VDI brokers and include offerings from Citrix, VMware and Provision Networks among

others. So far, it looks like there will be a VDI connection broker included in Windows Server 2008

R2 which is due either in late 2009 or early 2010.

Application Virtualisation

Application virtualisation is the process of packaging an application or set of applications and

delivering them to a PC or Terminal Server to allow them to be used. Before I go any further there

are two things to note in that sentence:

1.  Packaging is the process of identifying the components of an application so that we can

deliver it or remote it in a bundle. This includes the files, registry keys and configuration that

make up the application. Special utilities are used to do this. Most of the process is

automated but there’s still a little bit of skill, experience and retries involved in getting it

right.

2.  You’ll see that I didn’t say that the applications were installed. I said that they were

delivered.

When the application is delivered the user can run the application. The application runs in its own

bubble or sandpit that is isolated from the operating system and other applications. Operations

such as copy and paste are not affected. This allows you to keep the operating system clean and

healthy. It allows administrators to install an application only once. It allows otherwise

incompatible applications such as Office 2000 and Office 2007 to be used at the same time on a PC

or Terminal Server. The delivery process offers two options:

1.  The entire bundle is delivered to the user’s device. This is necessary for mobile users. 

2.  Only the core components that are required are initially delivered, e.g. shortcuts. Other

components are delivered or streamed as required. This minimises the footprint consumed

on the computer.

Microsoft has App-V in this space. It was formerly known as SoftGrid from Softricity. Unfortunately,

Microsoft has decided to only make this available as an additional purchase to the few customers

that acquire Software Assurance for their desktops. This is rather regrettable. Citrix includes

application streaming in their presentation virtualisation and VDI products to simplify application

deployment.


Hardware Virtualisation

There are all kinds of virtualisation technologies used by hardware manufacturers to simplify

deployment and change and to minimise downtime. This includes abstracting MAC addresses, SAN fabric

World Wide Names (WWN’s) and LUN’s on storage.  

Wrapping up the Introduction

As you can see, there are lots of virtualisation technologies out there and I probably haven’t covered

everything – no, there’s no need to let me know about something I missed. Everyone has rightly

jumped on the bandwagon thanks pretty much to the hard work done by VMware in the days when this

stuff wasn’t popular. And I can’t forget Citrix who toiled away in the 90’s with presentation

virtualisation. We’ve a large menu to choose from and if we choose carefully we can put together

the right architecture for us that decreases our costs long term, increases IT agility, reduces our

carbon footprint, increases the ability of worldwide users to collaborate and increases uptime.

Imagine this design. A server network made up of both dedicated physical servers and Hyper-V

hosts. Those hosts are clustered which allows the child partition VM’s to fail over from one host to

another in case of hardware failure or host maintenance. Terminal Servers run on physical hosts to

grant most users access to a centralised session environment. VDI is used for those users with

special requirements and those virtual machines run on Hyper-V. App-V is used to deploy

applications to both the VDI VM’s and the Terminal Servers. Most of the application servers run as

virtual machines. The hardware being used includes a brand of virtualisation to allow automated

server hardware failover, a kind of RAID for servers. Storage virtualisation is used to simplify disk

management on a SAN and to allow disks to be provisioned more quickly.

The challenge is in managing these new layers we’ve injected into the network. We’ve only so many

skilled administrators and so much time. We’re already struggling to keep up so what better to

manage the network than the network itself? That’s Microsoft’s answer. System Center builds

vendor-specific knowledge (from HP, Dell, Citrix, Microsoft and others) into the network to manage the network.

That’s why Microsoft believes they have a leading solution on the market. And honestly, it’s why I

advised my employers to go the Microsoft route when adopting a virtualisation solution. Everything

I described above is what we are able to do today; it’s not some “in the near future” fanciful tale.

But enough of all that! You didn’t download this document to read about App-V, XenDesktop or

ESXi. For the rest of this document I’m going to focus on Hyper-V, Microsoft’s hypervisor product

that is available in Windows Server 2008 and the free Hyper-V Server 2008.


Hyper-V Details

For me this is the most important section of this document. Ever since Hyper-V made its first public

appearance back in 2007 as a pre-release test product, there have been a lot of rumours, false

information and biased commentary about it. What can it do, is it enterprise ready, what’s good,

what’s bad, how does it compare with the competition? When I evaluated Hyper-V originally I was

happy with and comfortable with VMware ESX and VI3. But I felt it necessary to give Hyper-V a shot.

I started reading. Obviously the material you get from Microsoft is biased because they’re selling

the product. Then there was the ... I don’t want to get into a religious war here so please forgive me

... the fundamentalist VMware camp.

Let me repeat it: I think VMware ESX is excellent. For some scenarios I think it is the right choice. For 

others I think Hyper-V is the right choice.

I trawled through the net for facts and figures. Almost everything that criticised Hyper-V came from

a biased opinion rather than being an objective comparison or critique. The more I learned about

Hyper-V, the easier it got to tell who was being fair. Then one morning in the summer of 2008 I got

an email from our MD asking me if the facts on a web page comparing ESX with Hyper-V were true. I

started reading and I could tell this person hadn’t seen anything of Hyper-V since 2007. The facts

were all really, really wrong. That was one very long reply that I had to write to my boss, knocking

each of the incorrect comments off. My advice is this:

If you are evaluating Microsoft Hyper-V vs Citrix XenServer vs VMware ESX then do not trust any

one source. Apply that to me too. You probably don’t know me from Adam. I might have an

agenda and so might any of the other commentators. Get yourself a decent server that is capable of 

running any of the “big 3” (check the hardware requirements of all 3 and find something common,

e.g. a HP DL360 G5) and try evaluation editions of the products. Look for the features you need and

test them out. Ask for advice from different sources, not just the consultant who’s knocking on your

door to sell you a service. Remember that everyone trying to sell you something has an agenda

which may or may not coincide with your requirements. Beware lazy reporters and commentators.

I’ve encountered many professional looking articles that were flat out wrong. It appeared that these

“professionals” hadn’t even used the products they were reviewing. And a final tip is to check the

dates that any document or web page was written because they may not be current, e.g. anything

written about Hyper-V in Windows Server 2008 before March of 2008 is not current.

Enough of that blathering! You didn’t download this document for all that. Let’s get down to seeing

what this Hyper-V thing is made of and what it can do for you. We’ll also have a look at a few

limitations, some of which Microsoft is dealing with and which will be addressed soon or in the next

release, Windows Server 2008 R2.


Hyper-V Architecture

Hyper-V is a hypervisor virtualisation solution. This means that there is a very thin piece of software

between the virtual machines and the hardware that they are hosted on. Let’s have a closer look at

hypervisors by starting out with one you may already be familiar with.

[Figure: Monolithic Hypervisor – VM1, VM2 and VM3 run on top of a hypervisor that contains the drivers; the hypervisor sits directly on the hardware.]

The above is what’s referred to as a monolithic hypervisor. The hypervisor sits on top of the

hardware and provides an interface to the hardware for the virtual machines. There’s a positive and

a negative to this approach associated with where the drivers reside. The drivers for the hardware

reside in the hypervisor. This means that the vendor of the software must have drivers for the

hardware you want to use. Therefore the vendors have relatively small and very tightly controlled

hardware compatibility lists (HCL’s). The positive on this is that they hammer those drivers with

heavy testing to give you a predictable experience. However, if something goes wrong with those

drivers then you can get into a “he said/she said” argument between the vendors of the guest

operating system, the hypervisor and the hardware.

Let’s jump quickly into computer theory. Where does the operating system normally execute? It

normally runs at ring 0 of the processor in privileged mode, making it very secure. That’s where the

monolithic hypervisor runs. This means that the VM’s run at a higher ring and their contained

operating systems run nowhere near where they would run on a bare metal or dedicated physical

server. Performance is compromised, not to mention security (in theory).


[Figure: Microkernelised Hypervisor – VM1, VM2 and VM3 run on a thin hypervisor that sits directly on the hardware; there are no drivers in the hypervisor.]

Microsoft went with a microkernelised hypervisor for Hyper-V. There are no drivers in the

hypervisor. I’ll show you later where they reside but the good news is that any hardware on the

Windows 2008 HCL can provide drivers for Hyper-V. There’s a couple of other things to check so

don’t go shopping yet. The benefit of this is that you have a huge range of hardware you can employ

for virtualisation. This can vary from enterprise servers to laptops but only if all the requirements

are met. We’ll talk about those in a little while. 

Hyper-V’s hypervisor runs at Ring -1 (that’s minus one) thanks to CPU assisted virtualisation. That’s

one of the two BIOS requirements that I mentioned earlier. You must ensure it is (a) possible to turn

it on in your BIOS (assuming the CPU supports it) and (b) you have actually turned it on and booted

up afterwards. The virtual machine runs at Ring 0. The guest’s operating system runs at exactly the

same level as a virtual machine as it would on a dedicated physical server, maintaining performance

and security integrity.

Now let’s dig a little deeper and see some of the internals of how Hyper-V works.

[Figure: Newly Installed Server – Windows Server 2008 running directly on the hardware.]

Hyper-V is a role in Windows Server 2008. If you’re new to Windows Server 2008 then check out my

guide to Server Manager to learn more about roles, role services and features

(http://tinyurl.com/cv5vlw). To get started we install a copy of Windows Server 2008 with Hyper-V.

Those last two words are important. If you buy Windows Server 2008 Standard, Enterprise or Data

Center editions then there are two types you can buy; one with and one without the right to use

Hyper-V. The cost difference is around $28. Don’t worry; we’re not going to end up with some

virtualisation software sitting on top of Windows as you’ll see next. 


[Figure: New Hyper-V Server – the hypervisor now sits on the hardware, with Windows Server 2008 (the parent partition) running on top of it.]

We’ve now enabled the Hyper-V role and rebooted the server. The hypervisor is installed and is

slipped underneath Windows as long as certain features of the BIOS are enabled. We’ll talk about

those once we’ve covered the architecture. The host operating system is now referred to as the parent partition in Microsoft-speak.

[Figure: Populated Hyper-V Server – the parent partition and two child partitions running side by side on the hypervisor, which sits on the hardware.]

Once you’ve got Hyper-V up and running you can add virtual machines. To use Microsoft-speak,

you’ve added child partitions. To be honest, the only time you’ll use the term child partition is when

you’re reading something published by Microsoft, sitting a Microsoft exam or reading or writing an

annoying whitepaper. Virtual machine is the phrase the world uses and has been happy to use since

machine virtualisation technology was created. In fact, any of the Microsoft tools you’ll use to

manage virtual machines say things like “Add Virtual Machine”, etc. But you should still be aware of 

what a parent and child are.

Now let’s dig even deeper and find out where those pesky drivers reside and how virtual machines

interact with the hardware.


[Figure: Hyper-V architecture – the parent partition (Server Core Windows kernel and drivers in kernel mode with the VSP’s, plus the user-mode virtualisation stack: VM Service, WMI Provider and VM Worker Processes) and a child partition (Windows kernel, enlightenments and VSC’s in kernel mode, applications in user mode) both run on the hypervisor and communicate over the VM Bus; the hypervisor sits on the hardware.]

I’m not going to go nuts with this diagram. If you want to get deeper than I will then go here:

http://tinyurl.com/dn9896.  I’ll just point out a few things. You’ll see that both the parent partition

(the host OS) and the child partition (the virtual machine) run at the same level on the processor. In

fact, the child partition is running at ring 0 just like any operating system does if you install it on a

computer.

For every child partition running on the host there is a small VM Worker process running on the

parent partition.

There are enlightenments to provide optimal driver performance for the virtual machine. There is a finite set of enlightenments, i.e. they are only available for certain operating systems that run in

child partitions. These obviously include Windows (I’ll talk more about that later) and they also

currently include SUSE Enterprise Linux.

The child partition has a set of generic Microsoft drivers that are the same no matter what hardware

is hosting the VM. That makes the VM portable across generations and manufacturers of hardware.

The real drivers for the hardware are installed on the parent partition when it is originally installed

(before Hyper-V is enabled). The child’s generic Microsoft drivers interface with the real drivers for

accessing the hardware as follows:

• The VSC’s (Virtual Service Clients) pass the request to the VM Bus.

• The VM Bus runs in the hypervisor at Ring -1. It is critical that it is protected against buffer

overflows. Imagine if a VM was successfully attacked and a driver was used to pass “data”

into the VM Bus that would actually overrun a buffer and execute a command at the most

privileged level on the processor. That would compromise the parent and every child on the

host. Microsoft defends against this by forcing you to enable Data Execution Prevention in

the BIOS. Your hypervisor will fail to load (and only load the parent partition) if you haven’t

done this.

• There is 1 VM Bus for every child. It will pass the request to the VSP (Virtual Service

Provider).

• The VSP in turn passes this to the drivers where the hardware is accessed.


The VM Bus runs only in RAM and is designed for huge amounts of I/O, making it very efficient.

You’ll also notice that the driver request never leaves kernel mode. 

You can run other Microsoft operating systems and other Linux variants with a Xen enabled kernel

on Hyper-V, just without the optimal performance possible with enlightenments ... or the support of 

Microsoft PSS. This process without enlightenments is called emulation. Emulation does not include

a VM Bus. Instead, the child partition thinks it is on a traditional hardware installation. There are

generic hardware (rather than Microsoft) drivers installed. Their requests are trapped, brought to

the parent in kernel mode and sent up into user mode where they are translated and passed

back down to kernel mode. This is less efficient than using a VM Bus, mainly because there isn’t just

one transition between kernel and user mode, but many to make emulation possible.
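To illustrate why, here’s a toy Python model of the two I/O paths just described. It does nothing but record the hops and count the kernel/user mode transitions; it’s my own simplification, not real Hyper-V code:

# Toy model: compare the enlightened (VM Bus) path with the emulated path.
def synthetic_path():
    # VSC -> VM Bus -> VSP -> real driver; never leaves kernel mode.
    hops = ["guest VSC (kernel)", "VM Bus (kernel)",
            "parent VSP (kernel)", "hardware driver (kernel)"]
    return hops, 0

def emulated_path():
    # The trapped request must be translated in parent user mode.
    hops = ["guest generic driver (kernel)", "trap to parent (kernel)",
            "translate in user mode (user)",   # kernel -> user transition
            "back down to kernel (kernel)",    # user -> kernel transition
            "hardware driver (kernel)"]
    return hops, 2

for name, path in (("synthetic", synthetic_path), ("emulated", emulated_path)):
    hops, transitions = path()
    print(name, len(hops), "hops,", transitions, "kernel/user transitions")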

System Requirements

If you are still awake I’ll do my best to fix that for you now. I’m just joking (I think). The system

requirements aren’t all that bad or complicated. 

Licensing

We’ll start with the basics. There are two basic kinds of Hyper-V. There is the free to download

(http://tinyurl.com/63ey36) Hyper-V Server 2008. That one’s easy: just download it and use it. 

The more common one you’ll see in production is the role that is installed on Windows Server 2008.

It is available on these SKU’s:

• Windows Server 2008 Standard Edition x64 with Hyper-V

• Windows Server 2008 Enterprise Edition x64 with Hyper-V

• Windows Server 2008 Data Center Edition x64 with Hyper-V

If you have Software Assurance for Windows Server 2003 then you are entitled to use Hyper-V. If 

you buy Windows Server 2008 without Hyper-V then you are not entitled to run Hyper-V.

If you run Windows Server 2008 with Hyper-V enabled and your VM’s only run Windows Server 2003

then you only need Windows Server 2003 CAL’s. This only changed in January 2009. Prior to this

you needed Windows Server 2008 CAL’s to cover the host. The SPUR for Hyper-V is constantly

changing so I suggest that you run a search to find the latest version.

You’ll also note that Hyper-V will only run on an x64 installation of Windows Server. It will not run on

x86 servers.


The functionality provided by the different versions is as follows:

Feature                           | Hyper-V Server 2008 | Windows Server 2008 Standard | Windows Server 2008 Enterprise | Windows Server 2008 Data Center
----------------------------------|---------------------|------------------------------|--------------------------------|--------------------------------
Can be managed by VMM 2008        |                     | Y                            | Y                              | Y
Can be managed by OpsMgr 2007     |                     | Y                            | Y                              | Y
Clustered Hosts / Quick Migration |                     |                              | Y                              | Y
> 32GB RAM in Host                |                     |                              | Y                              | Y
> 4 Host CPU’s                    |                     |                              | Y                              | Y
Add more server roles             |                     | Y                            | Y                              | Y
Free VM Windows Server Licenses   |                     | 1                            | 4                              | Unlimited

• VMM 2008: System Center Virtual Machine Manager 2008 is an additional purchase from

Microsoft to manage Hyper-V servers and can manage VMware ESX or VMware Virtual

Center servers too.

• OpsMgr 2007: Microsoft System Center Operations Manager 2007 monitors the health and

performance of managed hardware, devices, operating systems, services and applications. It

can integrate with VMM 2008 to manage Hyper-V. This is further expanded by using 3rd

party management packs and Pro Tips.

• If you are using System Center then you can deploy a System Center Enterprise CAL to the

host and get free CAL licensing for each of the VM’s.

• Quick Migration: By clustering Hyper-V host servers using Windows Server Failover

Clustering you can make virtual machines a highly available resource. When you fail over a

virtual machine using Quick Migration, it saves its running state as of that time, migrates to

the destination host and restarts.

• Don’t add roles to your Hyper-V parent partition. It’s not recommended at all. The very

most I add is the feature for SNMP for hardware management agents.

• Free Guest Licenses: This is a big perk. If you require highly available virtual machines then

buying Enterprise Edition or Data Center Edition of Windows Server 2008 can pay for

itself almost instantly. Buying 1 copy of Windows Server 2008 x64 Enterprise Edition

with Hyper-V gives you an operating system for the parent and 4 free operating systems for

the child partitions. Beware though, that things are very much more complicated if you are

using SPLA licensing, i.e. in the hosting industry. A rough sketch of this licensing arithmetic follows below.
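Here’s that arithmetic for a single host as a Python sketch. The prices are deliberately made-up placeholders (check a current Microsoft price list before drawing conclusions); only the 1/4/unlimited free guest counts come from the table above:

# HYPOTHETICAL list prices -- substitute real ones before deciding anything.
PRICE = {"Standard": 1000.0, "Enterprise": 4000.0, "Data Center": 6000.0}
FREE_GUESTS = {"Standard": 1, "Enterprise": 4}

def host_licence_cost(edition, vm_count):
    if edition == "Data Center":
        return PRICE[edition]  # unlimited Windows Server guests included
    # Parent licence plus a Standard licence per guest over the free allowance.
    extra = max(0, vm_count - FREE_GUESTS[edition])
    return PRICE[edition] + extra * PRICE["Standard"]

for vms in (2, 4, 8, 12):
    print(vms, {e: host_licence_cost(e, vms) for e in PRICE})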


Hardware Requirements

We’ve already referred to the hardware requirements but we’ll summarise them here for easy

reference:

• A computer on the Windows Server 2008 hardware compatibility list. Look for the “Certified

for Microsoft Windows Server 2008” logo.

• A 64-bit processor, because Hyper-V only works on Windows Server 2008 x64.

• The BIOS must allow you to enable CPU assisted virtualisation on the CPU *. This is the

functionality that gives us Ring -1 for the hypervisor.

• The BIOS must allow you to enable Data Execution Prevention (DEP) on the CPU *. This

protects the hypervisor from buffer overruns in Ring -1.

* Note: Some hardware manufacturers supply computers with processors that support DEP and CPU

assisted virtualisation but they hide them by not providing options to enable them in the BIOS. Check 

with your supplier that you can enable both of these before you make a purchase. This is generally 

not a problem when buying servers from the big 3, e.g. HP, Dell and IBM. But it is a problem if you are

buying a PC or a laptop for lab or demo purposes, e.g. my personal laptop allows me to turn on DEP

but not CPU assisted virtualisation even though my CPU supports both. 

Microsoft isn’t the only company to rename things. Hardware terms such as DEP and CPU assisted

virtualisation are rebranded by the manufacturers (e.g. Intel VT and AMD-V for CPU assisted virtualisation; Intel’s XD bit and AMD’s NX bit for DEP). You may have to search through your BIOS

settings and check with your manufacturers to find out what they call them. A good place to look in

the BIOS is the advanced processor settings. Play it safe and make sure you have a default to fall

back to.

You can still install and enable the Hyper-V role on a computer if those two settings aren’t turned on,

but the hypervisor will not run. If you do this then you will need to reboot the computer, enable

those features and restart the computer again. I had a recent situation where a server’s

motherboard was replaced by an engineer. It powered up and the hypervisor failed to load. Why?

The BIOS settings are stored on the motherboard and I had the manufacturer’s defaults once again

on the server. 2 reboots later and a quick BIOS edit and it was sorted out.

Note: This brings up an interesting prospect. You can install Hyper-V into virtual machines. So you

can set up a physical lab server and set up VM’s with an iSCSI Hyper-V cluster to do basic experiments

with.

Here are some manufacturers’ links to help out:

• HP: http://tinyurl.com/ae55s6

• Dell: http://tinyurl.com/bougd6

• IBM: http://tinyurl.com/ddh87a


Guest Support 

I’ve talked already about enlightenments. Microsoft provides them in the form of Integration

Components (IC’s) for a finite set of operating systems. These are the OS’s that Microsoft supports

as operating systems that are installed in Hyper-V VM’s. The following have IC’s available for them: 

• Windows Server 2008 SP1 (SP1 is actually the RTM release) with 1, 2 or 4 virtual processors

• Windows Server 2003 SP2 with 1 or 2 virtual processors

• Windows Vista SP1 with 1 or 2 virtual processors

• Windows XP SP3 with 1 virtual processor

• Windows 2000 Server with 1 virtual processor

• SUSE Linux Enterprise Server 10 SP2 and SP1 x86 and x64 with 1 virtual processor

• Windows 7

• Windows Server 2008 R2

The IC’s for the above are an additional installation that is done from the Hyper-V or the VMM 2008

administration consoles. The exceptions are:

• SUSE: These are a free RTM download from http://connect.microsoft.com

• Windows 7 and Windows Server 2008 R2 have the IC’s built into the operating system

Other operating systems can be installed using emulation. For example, a legacy Windows operating

system or a Linux distribution with a Xen enabled kernel can run on Hyper-V as guest operating

systems. You will need to run the legacy network adapter rather than the more efficient VM Bus

network adapter in the VM configuration. I’ve successfully installed RedHat and CentOS on Hyper-V

and I’ve seen screenshots of Windows 3.11 running in Hyper-V.

Sizing Of Hyper-V Hosts

How long is a piece of string? The question you’ll inevitably be asked is what specification should my

Hyper-V hosts be? The answer depends on what you plan to run on them. There’s a common

misconception that you can use less RAM and storage when deploying virtual instead of physical

machines. Unlike VMware ESX, Hyper-V cannot perform RAM oversubscription. This means that if a

physical machine needed 2GB of RAM then it will need 2GB of RAM as a virtual machine. In fact, for

management purposes, it will need slightly more. The same goes for storage; that doesn’t magically

get smaller.

The parent partition will generally need to meet the requirements of Windows Server 2008. Any

host running a number of VM’s will probably need to reserve 2GB of RAM for the parent partition.

The numbers stack up as follows:

• 2,048MB RAM for the parent partition.

• 300MB for the hypervisor. More often than not you can include this in the parent’s

2,048MB of RAM.

  Whatever additional RAM is required for drivers or software you install on the hypervisor.

I’ve not had to add anything yet but I also don’t install anything other than hardware

management agents on the parent partition.

  RAM for your VM’s, e.g. if I add 2 VM’s with 2,048MB RAM each then I have to have 4,096MB of RAM.


  Each VM has a maximum overhead charge of 32MB for its first 1GB of RAM. For

example, if I have 10 VM’s then I must allow for 320MB of RAM for managing those VM’s. 

  Each additional GB of RAM per VM has an additional maximum overhead charge of 8MB per

GB of RAM. If I have a 4GB VM then its total maximum overhead would be 32MB +

(3*8MB) giving us 56MB. Add that to the 4,096MB of RAM and the VM has a real maximum 

charge of 4,152MB.

That management overhead per VM is listed as maximum. In reality the VM will likely have a smaller

overhead charge. However we have to allow for the maximum in case the VM’s on a host are really

busy and start to use their full assignment. A VM will always take the RAM that is assigned to it, e.g.

the above 4GB VM will fail to start if it cannot acquire 4GB of free RAM from the host machine.

Let’s have a look at an example that strictly follows the above rules. The host machine will run 6

VM’s with 2GB RAM each and 2 VM’s with 4GB RAM each. 

Item                RAM Reserve   1st GB Overhead   Additional GB Overhead   Item Required RAM
Parent Partition    4096MB        -                 -                        4096MB
Hypervisor          300MB         -                 -                        300MB
2GB RAM VM          2048MB        32MB              8MB                      2088MB * 6 = 12528MB
4GB RAM VM          4096MB        32MB              24MB                     4152MB * 2 = 8304MB
Total RAM Required For The Host                                              25228MB

That host is going to require 25,228MB which we can round up to 25GB of RAM. When I first looked at this stuff I saw the word “reserve” with the parent partition and wondered if I could actually do

that with Hyper-V using some setting. If you can, I haven’t found it yet. However, you definitely can

accomplish this using VMM 2008 via a policy setting.

This is all very complicated and no one wants to do this by hand because it’s easy to make a mistake.

I’ve created a spreadsheet which I’ve shared (http://tinyurl.com/czo67y) to help out.
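If you’d rather script the check than use a spreadsheet, here’s a minimal Python sketch of the same arithmetic (the parent reserve and hypervisor allowance match the worked example above; the per-VM overhead figures are the rules just described):

    def vm_overhead_mb(vm_ram_mb):
        # Maximum overhead charge: 32MB for the first 1GB of VM RAM,
        # plus 8MB for each additional 1GB.
        extra_gb = max(0, (vm_ram_mb - 1024) // 1024)
        return 32 + 8 * extra_gb

    def host_ram_required_mb(vm_ram_sizes_mb, parent_mb=4096, hypervisor_mb=300):
        total = parent_mb + hypervisor_mb
        for ram_mb in vm_ram_sizes_mb:
            total += ram_mb + vm_overhead_mb(ram_mb)
        return total

    # The worked example above: 6 x 2GB VM's and 2 x 4GB VM's.
    vms = [2048] * 6 + [4096] * 2
    print(host_ram_required_mb(vms))  # 25228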

The next thing we need to look at is storage. What type of disk should you use? Realistically you

need to use the fastest disk that you can afford. SATA or FATA might be fine for lab or test

environments. For production you will need something faster to minimise disk latency.

What sort of RAID levels will you use? Actually the answer is pretty simple: use what you normally would use. If you have a VM that needs fast write access then use RAID 1 or RAID 10; they give you the fastest write performance. If moderate write performance is acceptable then use RAID 5. RAID 1 and RAID 10 are expensive because you lose 50% of your disk to fault tolerance, so we very often try to use RAID 5 where possible.


How should you lay out your VM’s on your disks? If you are running Hyper-V with Windows Server

2008 Failover Clustering then you have only one choice. Each VM must have its own dedicated LUN

or volume. You should choose the RAID level that is appropriate to that VM. If you are running a

single host then you have a choice. You can put all of the VM’s on a single LUN but that will impact

performance. In a perfect world with unrestricted budgets each VM will exist on a LUN that has a

dedicated RAID controller.

Windows Server 2008 R2 introduces a cluster file system that allows many VM’s on one volume while retaining host fault tolerance. There’s more on that later.

How much disk will the VM consume? That one is pretty simple to calculate for production

environments:

  Allow disk space for the parent partition. 40GB is the recommended minimum for Windows

Server 2008. Machines with large amounts of RAM require huge paging files. If our host has 25GB of physical RAM then do we need some huge C: drive for the parent? Nope. The reason is that once Hyper-V is up and running we’ll only have 2GB in the parent.

  Allow for the size of the virtual disk. If the virtual disk will be 100GB then reserve 100GB.

  You need to allow space for the VM to save its state for quick migration in clusters and

hibernation if the host restarts. If the VM has 4GB of RAM then allow an additional 4GB of 

disk space.

  You should be aware that a full volume is not a healthy volume. Ideally you’ll not let the

volume go beyond 80% capacity. Realistically, you’ll probably go for 90% because server disk

is not cheap and someone has to pay for all that empty disk space you’ll end up having. 

Here’s an example to illustrate the above: 

Item                      Disk Space Required   Hibernation Space   Free Space Factor   Total
Parent Partition          60GB                  -                   -                   60GB
4GB RAM & 100GB Disk VM   100GB                 4GB                 1.1                 (100 + 4) * 1.1 = 115GB
2GB RAM & 200GB Disk VM   200GB                 2GB                 1.1                 (200 + 2) * 1.1 = 223GB
Total Space Required For The Host Machine                                               398GB
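The same disk arithmetic as a minimal Python sketch (the 1.1 free-space factor and the per-VM rounding up match the example above):

    import math

    def vm_disk_required_gb(vhd_gb, vm_ram_gb, free_space_factor=1.1):
        # Reserve the virtual disk itself, plus room to save the VM's RAM
        # for quick migration or a host restart, then pad the volume so it
        # never runs completely full.
        return math.ceil((vhd_gb + vm_ram_gb) * free_space_factor)

    parent_gb = 60
    vms = [(100, 4), (200, 2)]  # (virtual disk GB, RAM GB)
    total = parent_gb + sum(vm_disk_required_gb(d, r) for d, r in vms)
    print(total)  # 398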

This has shown how to calculate the space required for a production Hyper-V host. There are a few

techniques to oversubscribe your disk space for virtual machines. These techniques are not

supported in production and are thus restricted to test environments. They also make it impossible

to calculate requirements for the host without building up substantial empirical data that will be

unique to your environment.


I’ve made no mention of snapshots in this section. A snapshot is a view of a VM’s RAM, disk data

and configuration at a point in time. You can build up a collection of these snapshots for a VM and

switch between them. This is great for test environments where you can build a healthy machine,

do some work quickly and flip back to the known healthy state to do some other work. Snapshots

are not supported by MS in production. Microsoft says that you should use normal backup tools

(e.g. Data Protection Manager 2007 SP1) to take a snapshot and store that on your normal backup

medium. A Hyper-V snapshot will consume an unpredictable amount of disk. I’ll talk about the mechanics later but, very quickly, a snapshot creates a new virtual disk in which everything written or deleted since the snapshot was taken is recorded. That new virtual disk continues to grow dynamically as time goes by. It is stored with the virtual machine on its volume by default, though you can specify a dedicated location.

Application Support 

It’s one thing to have support for your guest operating system but it’s another thing entirely to have

support for your applications. There is no one location to check every application. You’re simply

going to have to check with the vendors of your applications to see if they are supported for running

in Windows Server 2008 Hyper-V virtual machines.

Microsoft has set up a page (http://tinyurl.com/abhnv8) to list their applications that are supported.

They’ve been adding support for their products as they’ve been released or updated using service

packs. That’ll likely become a complete listing of their product catalogue thanks to the emergence

of a new feature in Windows Server 2008 R2 called Native VHD (http://tinyurl.com/bv6lfg).

Hyper-V Features

We’re now going to have a look at what Hyper-V is actually able to do. There’s no sales pitch in

here; I don’t work for Microsoft and I don’t have any MSFT shares. 

Storage

Hyper-V supports many kinds of storage, from internal disk and fibre-channel SAN to iSCSI. More often than not you’ll see Microsoft refer to iSCSI in their documentation because of its affordability and the performance potential of 10Gb Ethernet.

You can install one virtual machine per volume or you can install many virtual machines per volume.

You could use 1 volume for many machines in a lab or demo environment. If your budget is

restricted and you have single Hyper-V hosts then this might also be the way to go. I prefer to use

dedicated volumes per virtual machine. I’m in a position where the space consumed is chargeable and must be controlled. I also need this for clustering, which brings up the next subject.

High Availability

Ah, we finally get to the one point that anti-MS people want to gripe about. This seemed to be the

one thing that would wind people up the most.


VMware, the established name in machine virtualisation, have had a feature called VMotion in ESX

(only with Virtual Center) for a few years. The idea is that all virtual machines are set up on a single shared volume. All ESX hosts in the cluster have simultaneous read/write access to the volume. Only one host runs each virtual machine, i.e. 1 host has many virtual machines and 1 virtual machine has 1 host. If there is a need to move a VM from one host to another, VMotion performs a cycle of replicating the running state (the RAM) of the VM from the source host to the destination host. Eventually there is so little left that the VM is paused, control is passed to

the destination (with the tiny amount of remaining RAM) and the VM is started up on the

destination server. The entire process takes a little while but the outage is so short that nobody

notices. In my testing with ESX 3.1 and 3.5, a continuous ping might miss 1 packet in 50% of tests.

Network operations continue uninterrupted. This feature is fantastic for dynamic load balancing and

for maintenance of hosts. It also means that you no longer have to cluster application servers for

hardware fault tolerance. Running them as a VM on clustered hosts takes care of this scenario.

Hyper-V in Windows Server 2008 allows for high availability. It uses Windows Failover clustering.

This means that there is no need for a management component purchase to enable high availability

of VM’s. However, this requires Windows Server 2008 Enterprise or Data Center editions to be used

as the hosts. On the face of it this sounds pricey. However, there are some licensing perks that completely write off that expense.

Unfortunately, there is a catch to Microsoft’s Hyper-V high availability. Windows does not yet have

a clustered file system. NTFS only allows 1 host to own and access the volume at one time. This

made Live Migration (this is how Microsoft branded their answer to VMotion) impossible. Microsoft

instead had to treat each VM as a clustered resource. Each VM has a dedicated LUN, i.e. a clustered

storage device. The VM’s files are stored in there and the volume is a dependency for the clustered

VM. When a VM is moved, it saves its state, i.e. hibernates. The LUN is failed over to the destination host and the VM is restarted from its saved state. The time taken for this process is determined by two things:

  The time to fail over the LUN: This is static.

  The time taken to save state and restart the VM: This depends on the amount of RAM in the

VM.

This Quick Migration took around 8 seconds for a VM with 1GB of RAM to fail over in my testing on a

4Gb fibre channel SAN. A VM with 4GB RAM took 12 seconds. A VM with 28GB RAM took 70

seconds.

When all this was announced there was uproar from the back seats. I remember one MS blog

having a very interesting and long debate on the issue. I’m not a Microsoft employee or shareholder

so I can safely say the following without getting shot.

If you need Live Migration now then go for VMware ESX and Virtual Center. They’re excellent products. But do you really need Live Migration? Will the business be totally upset if a VM takes 12 seconds to move from A to B? A 99.999% (Five Nines) SLA says there will only be 315 seconds of downtime per year. 315 – 12 is a lot of spare time! The more common 99.9% SLA gives us 31,536 seconds of downtime per year. That’s 8.76 hours! For most businesses or deployments, a Quick Migration solution is fine. Still the heckles came. Here are my questions for the Hyper-V haters:


1.  Do you run ESX at all?

2.  Do you run Virtual Center? You have no VMotion without it.

3.  How did you get by without VMotion before VMware got access to the Legato Replistor code

when EMC owned both companies? Did you need 99.999% uptime before then?
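To put that SLA arithmetic on a firm footing, here’s a quick Python check of the downtime each availability level actually permits per year:

    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

    def allowed_downtime_seconds(sla_percent):
        # The downtime budget is simply the unavailable fraction of the year.
        return (1 - sla_percent / 100) * SECONDS_PER_YEAR

    print(allowed_downtime_seconds(99.999))  # ~315 seconds ("Five Nines")
    print(allowed_downtime_seconds(99.9))    # 31,536 seconds, i.e. 8.76 hours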

But yes, there are a few who really need Five Nines and, right now, VMware ESX is probably the choice for them. However, Windows Server 2008 R2 fixes that, and it will probably be released in much less than 1 year from now.

So in a Hyper-V cluster you must have a single volume for each VM. If you are thinking like I

originally did then this comes to mind: 26 letters in the English alphabet and we might get 22 of

those to play with. I can have 16 hosts in a Hyper-V cluster. That’s not very many VM’s for 16 hosts!

Ah, we can use letterless or GUID drives instead. Simply choose not to use a drive letter when you

format the drive (before adding to the failover cluster). I highly recommend that you install an

update (http://tinyurl.com/dd9fqo) that allows you to copy the GUID from within the Failover

Clustering MMC in the storage section. My other tips:

  Maintain naming consistency, e.g. a VM is called WINSVR01.

  Create LUN’s and volumes 1 at a time and VM’s 1 at a time. Otherwise it’s easy to get

confused.

  The LUN in the SAN for its first disk is called WINSVR01 Disk1.

  The volume is formatted with the name WINSVR01 Disk1.

  The volume is imported into the Failover Cluster and renamed to WINSVR01 Disk1.

  Create the VM and name it as WINSVR01.

  Install the OS and name the computer WINSVR01.

Doing this, and taking your time, makes the setup self-documenting.

You should note there is a third-party cluster file system option for the current release of Windows

Server 2008 and Hyper-V. Sanbolic’s (http://tinyurl.com/bvpjc2) solution allows many VM’s on one

file system while allowing clustered quick migration. I cannot personally vouch for it or its

compatibility with VMM 2008.

Licensing

This is unbelievably complicated. So much so that there are documents scattered over the plains of 

http://www.microsoft.com on the subject and new updates are released every few months. And sorry, there is no “one document” to answer all of the questions. Your best bet is to use your favourite search engine to look for phrases like “Hyper-V licensing”, “SPUR” and “Product Usage

Rights”. If you’re in the hosting industry then things are even more complicated.


Here are some highlights:

  You must either run a “Windows Server 2008 with Hyper-V” SKU or Hyper-V Server 2008 to

legally run the virtualisation role. Licenses bought pre-Windows 2008 with Software

Assurance are covered for the “with Hyper-V” feature. 

  There is no license limit on the numbers of VM’s that you run on a host. 

  Windows Server 2008 Standard Edition with Hyper-V will allow you to run 1 free operating

system on a guest VM on that host.

  Windows Server 2008 Enterprise Edition with Hyper-V will allow you to run 4 free operating

systems on guest VM’s on that host. 

  Windows Server 2008 Data Center Edition with Hyper-V will allow you to run unlimited free

operating systems on guest VM’s on that host. 

  If your VM’s run legacy operating systems (Windows Server 2003 or Windows Server 2000)

and your network clients do not directly access services on the Hyper-V server then you do

not need Windows Server 2008 CAL’s. The CAL’s for your VM’s operating systems will be fine (a recent change in January 2009).

  When you convert a physical machine to virtual (P2V) and the physical machine has an OEM

license then you need a new license.

There is much more regarding licensing. It’s a huge subject. You’ve got to be aware of how to

license applications in the VM’s, quick migration, etc. Many of these apply equally to VM’s running

in ESX and XenServer. My advice is to call your Large Account Reseller (LAR) for Microsoft product

licensing advice and give them a complete breakdown of what you want to do. Then call the

vendors of your products from other companies and do the same. Check for licensing legalities and

support, e.g. will Oracle’s database product be supported on non-Oracle virtualisation platforms?


Maximum Configurations

Memory
  Host RAM support: up to 1TB (Windows Server 2008 Enterprise and Datacenter Editions); up to 32GB (Windows Server 2008 Standard Edition).
  VM RAM support: up to 64GB (assuming host availability).

Processors
  Maximum logical processors: up to 24 logical processors (cores) using 6-core processors; the key is that 4 CPU sockets is the current maximum. Requires update http://tinyurl.com/cmppuz
  Maximum CPU sockets: 4. Has been tested with more but not supported.

VM’s
  Maximum number of running VM’s per host: 192 (used to be 128), at 8 per logical processor. Requires update http://tinyurl.com/cmppuz
  Maximum number of configured VM’s: 512. Might have increased after the update http://tinyurl.com/cmppuz

Networking
  Maximum NIC’s per VM: 12.
  Maximum synthetic NIC’s: 8.
  Maximum emulated NIC’s: 4.
  Maximum virtual switches: unlimited.
  Maximum VM’s per virtual switch: unlimited.

Virtual Storage
  Maximum VHD size: 2TB.
  Maximum passthrough disk size: same as the guest operating system supports.
  IDE devices per VM: 4.
  SCSI controllers per VM: 4.
  Disks per SCSI controller: 64, allowing 256 SCSI disks per VM.
  Maximum TB per VM using VHD’s: 512TB.
  Maximum TB per VM using passthrough disks: depends on the guest operating system.

Snapshots (not supported in production)
  Maximum per VM: up to 50.

CD/DVD
  Virtual CD/DVD: up to 3 devices.
  Passthrough CD/DVD: only 1 VM at a time.

COM
  Maximum per VM: 2.

Virtual Floppy
  Maximum per VM: 1.

VM Processors

Each VM is capable of having one or more virtual processors depending on the guest operating

system. The underlying physical cores are shared by all of the VM’s. The hypervisor schedules the

VM’s time on the processor and redirects the operations to the physical processor. 

By default, all VM’s have the right to the same amount of time on the physical processors. We can

alter this:

 Reserve: a VM is always guaranteed a minimum amount of processor time. The VM won’t

use this if it doesn’t need it but the VM will get it even if the host is running at 100% utilisation.

  Relative Weight: VM’s of equal specification will have the same amount of processor time.

You can shift this to certain more important VM’s by prioritising them. 

  Limit: a VM will never be allowed to take more than this amount of time from the processor.

  Processor Numbers: You can double the processor time a VM will get by adding a second

virtual processor. You are not guaranteed that the two threads of execution in the VM will

run on different host processors. You cannot grant access to more virtual processors than

the number of cores in the physical host. You cannot grant access to more virtual processors

than the guest operating system can support normally or in Hyper-V.
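To see how these controls interact, here’s a toy Python model of proportional-share scheduling on a fully contended host (this is only an illustration of the idea, not the actual hypervisor scheduler; the VM names and figures are invented):

    def shares(vms):
        # Guarantee every reserve first, then hand out the remaining CPU
        # in proportion to relative weight, never exceeding any limit.
        alloc = {vm['name']: vm['reserve'] for vm in vms}
        spare = 1.0 - sum(alloc.values())
        pending = list(vms)
        while spare > 1e-9 and pending:
            total_weight = sum(vm['weight'] for vm in pending)
            still_pending = []
            for vm in pending:
                extra = spare * vm['weight'] / total_weight
                room = vm['limit'] - alloc[vm['name']]
                alloc[vm['name']] += min(extra, room)
                if extra < room:
                    still_pending.append(vm)  # can still absorb more CPU
            spare = 1.0 - sum(alloc.values())
            pending = still_pending
        return alloc

    # Reserve and limit are fractions of total host CPU; weight is relative.
    vms = [
        {'name': 'sql',  'weight': 200, 'reserve': 0.25, 'limit': 1.0},
        {'name': 'web',  'weight': 100, 'reserve': 0.0,  'limit': 0.5},
        {'name': 'test', 'weight': 100, 'reserve': 0.0,  'limit': 0.1},
    ]
    print(shares(vms))  # 'test' is capped at its limit; the surplus goes to the others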


Storage Controllers

Contrary to some commentators and reviewers you have the choice of either IDE or SCSI controllers

in Hyper-V. There is a catch: you must boot the VM from an IDE disk.

IDE disks are emulated and thus consume slightly more CPU than SCSI disks. Seeing as we have no

choice about booting up from IDE we should consider where we will store our applications and data

in a VM. For a lightweight VM, we probably can get away with a single IDE disk for everything.

For databases, mail servers, file servers, etc., you should strongly consider using SCSI virtual disks for

the data volumes. Here’s an example of a database server. 

A typical database server has 3 or many more volumes. The first volume will be the C: drive with the

operating system. The second volume will contain the database file. The third will contain the

database log file. We separate them for disk performance in the physical world and we can do the

same in the virtual world. Disk one is IDE; disks two and three will be SCSI.

VM Disks

There’s some variety in here that we can avail of in certain circumstances. Some are better for

performance, some consume less space and some are not supported in a production environment.

The first and the best performing of the disks is the Passthrough Disk. This is where we present a

volume from the underlying storage to a VM. There are two advantages:

  It gives near physical disk performance.

  The type does not limit the size of the disk. The only limitation is the VM’s guest operating

system.

There is a catch. You cannot perform a snapshot with Passthrough Disks. However, Passthrough

Disks are the way to go for VM’s that must have the best performance and need to be more than

2TB in size. They are supported by Microsoft in production.

The next is the fixed size disk. It is the second of the two supported types of disk in production. A

fixed sized disk is a virtual hard disk (VHD). Microsoft has published the format of VHD so other

companies can use it. A VHD is a file; nothing more. A VM that uses VHD’s for its disks is simply

using a driver that translates disk operations into file read/writes on the host’s storage system. To

move the VM around you could move its configuration file and its VHD’s. In reality it is not actually

that simple.

A fixed size VHD is easy to set up. You create the disk, pick a location and pick its size, e.g. 40GB. A

40GB file is created in the location you picked. The process takes some time because the whole disk has to be overwritten with zeroes. This overwrites anything that may have existed in the file

system beforehand; you wouldn’t want the owner of the new VM to scan its contents to pick up bits

of data that you’d thought you had previously deleted.

Microsoft aimed to get the performance of fixed sized VHD’s to within 10% of the performance of 

the same volumes on the same physical storage. They claim to have gotten to within 2% of that

performance in their testing.


Dynamically expanding VHD’s are not supported in production. You specify a maximum size of the

volume. A small VHD file is created and this grows as required. There is obviously going to be an

overhead as the VHD file on the host’s storage is grown in conjunction with writes within the virtual

file system. This type of disk is perfect in labs and demonstration environments because it consumes

just the storage it needs from the underlying host.

Differencing disks are the last of the disk types in Hyper-V and are also not supported in production.

A differencing disk is created and targets a source disk. For all intents and purposes, the differencing

disk has the same contents as the source disk as far as the VM is concerned. Every time the VM

reads old content from the differencing disk it is really reading it from the source disk. Nothing can

be written to the source disk so every piece of new data or modified data is written to the

differencing disk. The differencing disk will slowly grow. This is the slowest of the VHD types

because of the overhead.
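A rough sketch of the copy-on-write idea behind differencing disks (a block-level simplification; the real VHD format tracks sectors through a block allocation table):

    class DifferencingDisk:
        # Toy model: reads fall through to the read-only source disk unless
        # the block has been written locally; writes never touch the source.

        def __init__(self, parent_blocks):
            self.parent = parent_blocks   # source disk contents (read-only)
            self.delta = {}               # only new or modified blocks live here

        def read(self, block_no):
            if block_no in self.delta:
                return self.delta[block_no]
            return self.parent.get(block_no)

        def write(self, block_no, data):
            self.delta[block_no] = data   # the differencing disk slowly grows

    parent = {0: 'boot sector', 1: 'os files'}
    child_a = DifferencingDisk(parent)    # two "clones" of one source disk
    child_b = DifferencingDisk(parent)
    child_a.write(1, 'os files + patches')
    print(child_a.read(1))  # 'os files + patches', from the delta
    print(child_b.read(1))  # 'os files', straight from the source disk

This is also exactly why the lab cloning trick described next costs so little disk.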

Here’s an example of how you might use it in a lab. Create a clean build VM that has a dynamically

expanding VHD with Windows Server 2008 and install the software that you would install on all such

servers. Use SYSPREP to flatten the VM. Export this VM from Hyper-V and keep it safe. Now create

a number of VM’s that have 1 differencing disk each. Each of these differencing disks will target the

single VHD from your SYSPREP’ed VM. Boot up these new VM’s. They are all clones of the original

VM. The differencing disks will reference the contents of the original source disks and only grow as

new or modified data is created. The VM’s will boot up and be independent machines, allowing you

to rename them and add them to the network. You’ve built many VM’s with minimal effort and

minimal usage of disk. This is exactly how I created my Hyper-V lab and quality control environment

at work.

Backups

There are 3 options for backing up a VM and we’re going to look at each of these now. Some are

supported in production and some are not.

A snapshot is when we capture the running state (RAM) and disks of a virtual machine. We can take

up to 50 snapshots of a VM and jump between them. For example, I can create a VM with a clean

installation of Windows Server 2003. I take snapshot number 1. I now install IIS6 and set up a web

site. I take snapshot number 2. I now upgrade the OS to Windows Server 2008. I can test the VM

and see if everything is working OK. If not, I can return to snapshot 2. I’ve now lost everything I did

since then, i.e. the upgrade to Windows Server 2008. I can repeat the upgrade and make sure the

process is documented. Satisfied, I can return to snapshot 1 and repeat the entire process with SQL

2005 without rebuilding the VM from scratch. You can see how powerful this is for lab, testing and

demo work.

This process works using differencing disks and hence is not supported in production. When we take

a snapshot a new differencing disk is created. The VM now uses the differencing disk. The

differencing disk points to the last disk that the VM used.


The only supported mechanism for taking snapshots in production is to use a backup application that

can avail of the Hyper-V VSS writer, e.g. Microsoft’s System Center Data Protection Manager with

Service Pack 1. The Volume Shadow Copy Service allows the backup of the VHD and state as the

machine runs. It backs up everything that is not changing until there is just a tiny bit left. VSS

freezes the VM for an imperceptible instant to capture the remaining data. This is the only way to

back up that VM and maintain its integrity.

By the way, backing up domain controllers using snapshots is not supported. There is a danger of 

USN rollback if you recover those VM’s. This is bad! Have a read of Microsoft’s support policy for

virtual domain controllers at http://tinyurl.com/clf686. Basically, you should treat them like physical

machines.

The final backup solution is to actually back up your VM’s as if they were physical machines. This is

the only way to get granular backups of files and data. Check with your vendor for support of 

running an agent in a Hyper-V VM and then install and configure the agent as if it was running in a

physical server.

For disaster recovery (DR), there are some 3rd parties who are offering file replication mechanisms

that can be installed on the parent partition. They will then replicate the VM’s to a server in a DR

site. Alternatively, you can use normal DR replication/clustering solutions within the VM’s as if they

were physical machines.

Networking

The host server should have a minimum of 2 NIC’s. One will be for the parent partition and the

other will be for VM networking. If you cluster the host then you should have a minimum of 3 NIC’s

with the 3rd being used for the failover cluster heartbeat.

Currently, there is no support for physical NIC teaming on Hyper-V servers. According to people I talked with at TechEd EMEA 2008, Microsoft is working with hardware partners to come up with an

industry solution and this should be announced to the public fairly soon. Until then do not install or

configure NIC teaming software on your host. My experience has shown that it can cause later

problems, e.g. an inability to manage the host using VMM 2008 or an inability to configure VM VLAN

(Virtual LAN, a form of subnet with an identity and restricted to specified switch ports for controlling

broadcast domains and access) tagging.

When there is a NIC teaming solution then you could have 4 NIC’s for a minimal solution. 2 NIC’s

would create a teamed NIC for the parent partition and 2 NIC’s would create a teamed NIC for the

VM’s. This would logically extend to 6 NIC’s in the host for a failover cluster solution. 

It goes without saying that these should be at least 1Gb NIC’s. WiFi NIC’s are not supported in

Hyper-V. However there is a workaround for your laptop demo environment:

http://tinyurl.com/5p9yq8.

VM’s can exist on your physical network. They can even exist on the same VLAN as physical servers.

As such, VM’s are subject to the same firewall and access rules as physical servers.


There are 3 types of network:

  External: This allows VM’s to communicate with each other, the host and the physical

network, depending on VLAN’s and network/firewall rules. 

  Internal: VM’s can talk to the host and other VM’s on the same network. 

  Private: VM’s can only talk to other VM’s on the same network. 

External Networks are bound to a physical NIC on the host. You can only bind one network to a NIC.

A VM can be placed on a specific VLAN by using a VLAN tag or ID. You must work with your network

administrators to trunk  the required VLAN’s on the switch port that is connected to the NIC(s) for

your external network(s). Get the VLAN tags for the VLAN’s from the network admin. Now you can

do one of two things:

  Tag the external network: This allows all VM’s on this external network to gain access to that VLAN. However it is expensive because it consumes the mapped NIC on the host – 

remember only one external network per NIC! You would need 6 NIC’s to create 6 external

networks on different VLAN’s. It is expensive but it’s a quicker and simpler configuration to

manage which would be suited to simpler networks.

  Tag the VM’s NIC: Instead of tagging the external network and consuming NIC’s we can bind

the NIC of the VM to the VLAN of choice. This allows many VM’s to access many VLAN’s on

a single NIC. This is more complex to manage but ideal where there are many VLAN’s. 

There are two types of NIC we can use in a VM:

  The legacy NIC: It is used in legacy operating systems that do not have integration

components. They also support PXE.

  Synthetic NIC: This can only be used once integration components are installed. Even if your

OS supports those integration components, the NIC will not be configurable until those IC’s

are installed. PXE is not possible with Synthetic NIC’s. 

CD/DVD

You can present an ISO file to the VM so that it can install an OS from it, install software from it or

copy data from it.

Overhead of Virtualised Resources

I’ve talked about memory overhead when we sized the host. Microsoft has detailed the results of their overhead and performance tests here: http://tinyurl.com/dn9896 

The Host OS Installation Type

Windows Server 2008 introduced the Core installation type and the Full Installation type. If you buy

Windows Server 2008 then you have the choice of either installation type. You cannot do an “in

place” upgrade from one to the other. 


A Full Installation is the one we are used to with all the bells and whistles. A Core Installation strips

out the GUI and some other stuff. It reduces the attack surface, the number of applicable patches,

RAM requirements and the amount of disk consumed by the OS. By stripping away the GUI you

need to get used to scripting and command prompt. But once a machine is up and running you can

manage everything from a GUI, i.e. the remote administration tools on your desktop.

Microsoft urged us to use Core whenever possible and especially for Hyper-V. In fact, Hyper-V Server 2008 is a customised and stripped down version of Core that can only run Hyper-V and has a basic DOS-styled “GUI” wizard for configuration.

I had originally planned to use Core for our Hyper-V hosts. Getting Windows configured didn’t take

too long. The necessary commands are well documented on the net. In fact you can even get a tool

called Core Configurator (http://tinyurl.com/bweuzr) to give you a GUI on a Core Installation. It

translates everything into commands in the background for you.

But everything came crashing to a halt. I install the hardware management agents for our servers so that I can completely manage them using Operations Manager 2007. This way I know if there is a

hardware issue as soon as it happens and can call support to have an engineer out within 4 hours.

Who really wants the alternative, e.g. you find out that the server has failed when users or

customers ring up? Unfortunately the hardware manufacturers haven’t caught up with the idea of

the Core Installation. I did some searching and found mention of obscure scripting languages. In the

end, I went with Full Installations for everything. I also had to think about junior engineers. I can’t

do everything and I need other people in the company to be able to work. I can’t expect them to

know advanced command prompt procedures.

In the end, this proved a wise move because working with letterless (GUID) drives does sometimes require working locally and using copy/paste in Windows Explorer.


Deploying Operating Systems

To be honest, there is really nothing different here. You have a few choices:

  Build the VM’s by hand: This is fine for one or two VM’s. 

  Cloning using SYSPREP: Use SYSPREP and export the VM. Create new VM’s without disks.

Copy the template VHD to the new VM’s and configure them to boot with those copied

disks.

  Use automated building tools: You can use Configuration Manager 2007, WDS, BDD, Ghost,

etc for building your VM’s. Just remember that you’ll probably have to use the Legacy NIC.

It emulates a multiport DEC 21140 10/100TX 100Mb NIC.


FUD

There is plenty of fear, uncertainty and doubt regarding Hyper-V and most of it is unfounded. Here are some examples. I’m going to be balanced with this.

FUD: Hyper-V is just Virtual Server.
Truth: Untrue. Hyper-V is a true hypervisor that runs at Ring -1 on the processor.

FUD: Hyper-V is not an enterprise product.
Truth: Untrue. My experience is that it is very stable and offers excellent performance. The manageability is unmatched.

FUD: Hyper-V can only have Windows guests.
Truth: Microsoft also supports SUSE Linux. There are countless examples on the Internet of people running Xen-enabled Linux distributions.

FUD: You cannot have SCSI disks in Hyper-V.
Truth: You can, but you cannot boot from them. Virtual SCSI disks are more efficient but the real key is the underlying physical storage, e.g. 5400RPM or 15,000RPM, IDE or SCSI?

FUD: ESXi is just like Hyper-V.
Truth: Completely untrue. ESXi does not have clustering. ESXi has a web interface for management, not a parent partition. Where exactly do you install a management agent on an ESXi machine? You cannot do it.

FUD: ESX features more management.
Truth: False. This usually comes from those who have no idea that Virtual Center is an additional purchase. In fact, when comparing like-with-like, Hyper-V and VMM 2008 offer much more management, including the ability to manage ESX and many Virtual Center clusters.

FUD: There’s no VMotion so it’s not enterprise ready.
Truth: Just how did we ever get by without VMotion, just a few years ago? 99.999% is truly only needed by a tiny percentage of servers.

FUD: Disk management is complex.
Truth: This is true. You need to stay on top of things and use consistent naming standards.

FUD: There is a limited set of supported operating systems.
Truth: This is again true when compared to the competition.

FUD: I saw a video where a VM went offline for ages after quick migration.
Truth: The cluster was incorrectly configured and it was a beta release. The need for that configuration was removed before the product was released.


The Future

The future of Hyper-V is coming to us pretty soon. Windows Server 2008 R2 will be out either late

2009 or early 2010 and will include a new version of Hyper-V. You can download pre-release test

versions of the operating systems now to try it out.

  The big news is that Live Migration will be present thanks to a new cluster file system. It is

already being shown in demonstrations.

  The cluster file system allows many VM’s to be installed on one volume, much like VMFS in

ESX. This greatly simplifies disk management.

  Network traffic from the physical network to the VM is being optimised thanks to

partnerships with the hardware community.

  There will be greater power savings on idle hosts with “Core Parking”, i.e. idle Cores (and

then CPU’s) will hibernate. 

  CPU overhead for VM memory management is being optimised using new technology from

AMD and Intel.

  Native VHD allows a VHD to be copied to a dedicated Windows Server 2008 R2 host. This

host can then be booted up using the VHD.

  It will be possible to “hot add” storage. 

  There will be support for 32 logical processors.


Summary

I hope I’ve answered a lot of questions for anyone new to Hyper-V. I’ve talked about what the

product really is, gone through the architecture, discussed some of the features and cleared up

some of the commentary that’s out there on the net. 

I hopefully will get to write some more, deep technical documents pretty soon. The next one that I

want to write will cover installing a Windows Server 2008 machine with Hyper-V and setting up a Hyper-V

Server 2008 machine.

With any luck this document has given you the interest to download either the free product or an evaluation copy and try it out.