
Nexus 1KV Operating in UCS environment


Providing integration options and design recommendations for Cisco Nexus 1KV with Cisco UCS


Lab: Nexus 1000v with UCS B-series Servers

Discussion:

Traditional VMware Networking and the DVS

Nexus 1000v is a Distributed Virtual Switch software component for ESX and ESXi 4.0 and

above.

Traditional "old style" ESX networking defines a separately configured virtual switch (vSwitch)

in each ESX server. These switches have no coordinated management whatsoever, lead to

great confusion about who is configuring virtual switch networking (the server administrator or

the network administrator?), and are generally just frustrating and unworkable across a large

number of ESX servers. Even supporting virtual machine migration (vMotion) across two ESX

servers using the traditional vSwitch is a "pain".

The Distributed Virtual Switch (DVS) model presents an abstraction of having a single virtual

switch across multiple ESX servers. While each pod in this lab has access to only two instances

of ESX, the DVS supports up to 64 ESX servers.

The conceptual picture of the DVS abstraction looks like the diagram below. Note that the "DVS

concept" implies that the DVS is managed from some "central location". VMware provides

their own DVS where this point of management is the vCenter server.


The Nexus 1000v DVS

Nexus 1000v is a DVS implementation. The basic Nexus 1000v abstraction is that it provides a

central supervisor (the VSM, or virtual supervisor module) to be the central point of control of a

single switch across up to 64 ESX servers. This central supervisor is a virtual machine that runs

NxOS, and its functionality greatly resembles that of the Nexus 5000 series.

The part of the Nexus1000v software that is actually installed in ESX is called VEM (virtual

Ethernet module), which acts as a "virtual line card" for the supervisor. Like a line card, once

the VEM is installed, there is absolutely no administration at the VEM level. The entire

Nexus 1000v acts as a single unified switch.
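To make the "virtual line card" idea concrete, here is roughly what it looks like from the VSM command line once everything is up (a hedged preview only; you will run the real command in Task 17, and your module numbers and software versions will differ):

N1K# show module
// module 1 (and module 2, if you build an HA pair) is the VSM itself
// modules 3 and up are the VEMs, one per ESX server, listed just like line cards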

In the Nexus 1000v model all network configuration is done at the VSM. The vCenter server

reads configuration of ports and port profiles provided by the VSM and is where virtual

machines are "virtually wired" to the Nexus 1000v:

There is only one active VSM at a time. It is optional to configure VSM in HA-mode whereby a

secondary VSM will take over for the primary (including taking over the IP address with which

you access it) if the primary fails. The picture looks like this:


VSM-VEM Communication

The VSM communicates with each VEM in one of two fashions:

Layer 2 Communication

VSM and VEM communicate via two special VLANs (actual VLAN numbers are assigned at

Nexus 1000v installation time):

1. Control VLAN – all Nexus 1000v-specific configuration and status pass between VSM

and VEM on this VLAN --- this is a low level protocol that does not use IP

2. Packet VLAN – certain Cisco switching protocol information, such as CDP (Cisco

Discovery Protocol), pass on this VLAN – this does not use IP.

Layer 3 Communication

VSM and VEM communicate using TCP/IP (there can be routers between VSM and VEM).

There are no control and packet VLANs in this scenario.
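For reference, the choice between the two is made in the VSM's svs-domain configuration. A minimal sketch using the standard Nexus 1000v syntax (the installer dialogue in Task 10 sets up the L2 variant of this for you, so treat this as background rather than a lab step):

N1K(config)# svs-domain
N1K(config-svs-domain)# domain id 25 // must be unique per Nexus 1000v instance; the lab uses your pod number
N1K(config-svs-domain)# control vlan 930 // L2 control only
N1K(config-svs-domain)# packet vlan 931 // L2 control only
N1K(config-svs-domain)# svs mode L2
// or, for Layer 3 control over the management interface instead:
// N1K(config-svs-domain)# svs mode L3 interface mgmt0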

VSM virtual network adapters

The VSM virtual machine itself is always defined with 3 virtual network interfaces – they are

used and connected in this particular order!

1. Control interface – connected to control VLAN (unused if doing L3 control)

2. Management interface --- "regular interface" with IP you access via ssh to log in to the

VSM (and in L3 control used by VSM to control VEM)

3. Packet interface – connected to packet VLAN (unused if doing L3 control)

Note in the L3 control scenario, the VSM still has to have three virtual network adapters, but the

first and third one aren't used for anything (and can remain "dangling in the virtual ether").


Putting VSM inside the same VEM it is Controlling

The VSM is often a virtual machine outside the scope of the VEM's that it is controlling.

However, it is possible to run the VSM as a virtual machine attached to the same VEM that it is

controlling. This seems a little "inside out" but it is a supported configuration. In this case you

almost certainly want primary and secondary VSM's and have them as virtual machines attached

to different VEM's (although either or both may be able to migrate using vMotion across

VEM's). Note how in the picture the VSM VM's have 3 virtual adapters connected to 3 different

veth ports (for control, management, and packet in that order).


Nexus 1000v and VM Migration (vMotion)

One of the central goals of any DVS is to support vMotion in a more natural fashion. vMotion is

especially painful in the "traditional vSwitch" world because it doesn't work unless the separate

vSwitches on the source and destination ESX servers are configured identically (down to the last

capital letter). This becomes particularly painful with many ESX servers, with the vSwitch

providing precisely no coordinated configuration. A DVS presents an abstraction of one single

switch across multiple ESX servers and guarantees that there can be no virtual machine

networking inconsistencies that would prevent the migration.

In the Nexus 1000v, the port down from the DVS that connects to a VM (a "veth" port) is an

abstraction which naturally supports vMotion. When a VM migrates from one ESX to another,

the "veth" identity of that VM does not change whatsoever; the abstraction is that the VM is still

connected to the exact same virtual port both before and after vMotion. Therefore you are

guaranteed that the virtual port configuration is identical before and after vMotion. From the

point of view of the DVS, "nothing has changed" across a vMotion:


Nexus 1000v Features and Benefits

Nexus 1000v provides:

Single point of management for VM networking across up to 64 ESX (like any

DVS)

Familiar NXOS interface

Choice of layer 2 or layer 3 control between VSM and VEMs

Just about any Layer 2 feature you can think of, extended down to the VM (these

are just some examples that we will do in the lab):

o IP filters

o Private VLANs (isolating VM's completely from each other, or in groups).

o SPAN / ERSPAN (Switch packet analysis – basically packet sniffing)

Nexus 1000v on UCS B-series Servers with Cisco Virtual NIC

The Cisco Virtual NIC provides a lot of flexibility for configuration of the Nexus 1000v in that

you can "manufacture" virtual hardware that will be visible to the ESX as if they were physical

NICs (i.e. uplinks from the VEM).

The main strategy is to always manufacture these vNICs in pairs. Configure one member of

every pair to connect to fabric A and the other to fabric B (without A-B failover). Redundancy

between the pair will be provided by the Nexus 1000v's northbound load-balancing features

(called mac pinning), which allows active/active uplinks without any special configuration on

the northbound switch(es).

Here are alternate ways of adhering to the vNICs in pairs principle:

1. One pair of vNICs that carry all your required VLANs upward

2. Two pairs of vNICs (the one we do in the lab, to make it a "little complicated")

a. VM actual data traffic

b. everything else (Control, Packet, Management, vMotion traffic)

3. Three pairs of vNICs:

a. VM actual data traffic

b. vMotion, management

c. Control and Packet traffic

4. X pairs of vNICs:

a. VM actual data traffic for one VLAN

b. VM actual data traffic for a different VLAN

c. vMotion for some VM's (on one vMotion VLAN)

d. vMotion for other VM's (on a different vMotion VLAN)

e. .. you get the picture
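To make the mac-pinning piece concrete, here is a rough sketch of what an uplink port profile with mac pinning looks like on the VSM (the profile name here is made up for illustration; Task 16 walks through the exact profiles used in this lab, which also add private-VLAN and system-VLAN settings):

N1K(config)# port-profile type ethernet example-uplink // hypothetical name, for illustration only
N1K(config-port-prof)# switchport mode trunk
N1K(config-port-prof)# switchport trunk allowed vlan 200
N1K(config-port-prof)# channel-group auto mode on mac-pinning // active/active across fabric A and B with no port-channel needed northbound
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shutdown
N1K(config-port-prof)# state enabled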


The Final Entire Picture: What you will See in the Lab – UCS and N1K

The following screen shot shows the integrated configuration corresponding to "choice number

2" in the section above (so that we can show more than one pair, but not go too crazy in the lab):


vNIC Configuration from the UCS Point of View

The following screen shot shows the UCS configuration corresponding to "choice number 2" in

the previous section (pair of vNICs for uplinks for VM data, pair of vNICs for everything else).

In addition, you see one more vNIC (the first one), used for getting the whole thing installed.

Since we will be running VSM as a virtual machine inside these same ESX servers, we have to

start out with VSM on the regular "old vSwitch" and run all the applicable VLANs (control,

packet, etc) down to this vSwitch. Later, we can "abandon" this vNIC when all operations are

switched over to N1K.

(Screenshot callouts: pair of vNICs for N1K uplink (VM data only); pair of vNICs for N1K uplink (everything else); vNIC for "bootstrapping" (kickstart ESXi, running everything through the "old vSwitch").)


Lab Topology

You will have the ability to view the entire configuration of the UCS, including shared fabric

interconnects, chassis, and blade servers that are both dedicated to your pod as well as all others.

You will not have direct access to any of the shared northbound infrastructure (N7K’s, MDS

switches, and EMC Clariion storage arrays). These are preconfigured so that they will work

correctly with your service profiles.

Your own pod consists of two server blades on which you will be running ESXi 5.0. You will be

performing all Nexus 1000v installation and configuration as part of the lab.

You will be installing your own VSM(s) as virtual machine(s) in the same ESXi servers.


Accessing the Lab

Identify your pod number. This is a one-digit or two-digit number as shown. Later in this lab

this pod number is referred to as X (always in italics). Please do not literally substitute an X; that
would be what is referred to as an "I, state your name" problem.

Connect to the remote student desktop. You will be doing all of your work on this remote

desktop. In order to connect, left click on the Student Desktop icon on your lab web page and

select “RDP Client”, as shown:

An RDP configuration file is downloaded. Open it using your native RDP client (on Windows

clients you can just select “Open with” on the download dialogue, and you should not even have

to Browse for an application.)

Remote Desktop User: administrator

Remote Desktop Password: C!sc0123


Do all your work on the remote desktop. The following are the UCS Manager addresses that you

will be accessing. Almost all work is usually done with the first (primary) address listed, which

connects you to whichever UCS fabric-interconnect happens to have the primary management

role.

Device                    Address     Username      Password
UCS1 (primary address)    10.2.8.4    USER-UCS-X    C1sco1234
UCS1-A                    10.2.8.2    USER-UCS-X    C1sco1234
UCS1-B                    10.2.8.3    USER-UCS-X    C1sco1234

There is an organization created for all of your logical configurations ORG-UCS-X, where X is

your pod number. A template for an ESXi server with the correct storage and network

connections (as discussed earlier in this document) is preconfigured for you in your organization.

You will be creating service profiles from this template, watching an automated install of ESXi

(or doing the install by hand, if you really prefer), and proceeding with the Nexus 1000v

configuration.


Task 1: Access the UCS Manager GUI

1. Access the student remote desktop using the access information in the previous section

(run the rest of the lab in the remote desktop; you will not be able to access the 10.2.X.X

addresses directly from your own local desktop)

2. Access the UCS Manager GUI launcher web-page by navigating to the primary UCS

address listed in the table above (10.2.8.4):

3. Click the Launch UCS Manager button to launch the GUI.

4. Log in using the credentials listed in the table in the previous section. Remember X

refers to the pod number (1 or 2 digits) you see on the labops web page. Use your own X,

not the “25” shown in the example:


Task 2: Examine the VLANs Used in this Lab (Shared by All

Pods)

VLANs in UCS cannot be configured separately for each organization. While it would be possible for us to
create 24 different VLANs for VM data, 24 more for vMotion, and so on, it simplifies everything in
this lab that we all use the same collection of VLANs.

1. Navigate in UCS Manager on the Navigation Pane to LAN tab -> LAN Cloud ->
VLANs as shown and expand the VLANs (note these are not specifically under the Fabric
A or Fabric B sections ---- 99.99% of the time VLANs are configured across both fabrics,
since not doing so would seem to be missing the point about how A-B redundancy works in
UCS).

2. Note the VLANs that are used in this lab --- you might want to just write them down

because there are parts of the configuration where you will have to refer to them by

number. Since we are all using the same VLANs, later tasks in the lab will remind you of

the numbers when you need them:

Mgmt-Kickstart (198) will be used for management connection to ESXi itself (for

yourself and for VCenter to access ESXi) and for management connection (via ssh) to the

VSM. It will not be used for anything else.

The other VLANs are used as discussed in the introductory section to this lab. The

default(1) VLAN is not used at all in this lab, but there is no way to remove it from UCS

(following the operational theory of all Cisco switching products). Many enterprises

specifically do not ever use VLAN 1 for anything.
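For reference, the full set as you will configure it on the VSM in Task 16 is: 198 (Mgmt-Kickstart), 200 (VM data), 930 (control), 931 (packet), and 932 (vMotion).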


Task 3: Examine the Prefab ESX Service Profile Template Created for your Organization

1. In UCS Manager, on the Servers tab, navigate to: Service Profile Templates -> root -> Sub-Organizations -> your org

2. Highlight (single click) the template name ESXTemplate . The example shows the

org. for pod number 25. Please look at the one for your own org:

3. In the Content Pane, click on the Storage tab. You should be looking at something

like:

Pay no attention to the "desired order" column for the vHBA's. The important part is that

you have two vHBA's for SAN access on the separate A and B VSANs. Note that

your node WWN and port WWNs will be chosen from a pool when you create service

profiles from the template in the next task.

4. Now click on the Network tab. You should see 5 vNICs configured just as discussed in

the discussion introduction to this lab. To review, the strategy is:

a. First vNIC is for "bootstrapping" ---- will be attached to the original vSwitch so

that N1K can be installed and all networking can be migrated to the N1K DVS.


b. Next pair of vNICs, on fabrics A and B (vmnic1 and vmnic2) for N1K Uplink

(VM data VLAN only)

c. Last pair of vNICs on fabrics A and B (vmnic 3 and vmnic 4) for N1K Uplink (all

other VLANs besides VM data).

5. Now click on the Boot Order tab. Note that service profiles derived from this

template will try to boot off the hard disk first. When you first create and associate your

Service Profiles the hard disks will be scrubbed, so the boot will fall through to the first

network adapter and your blade server will PXE-boot into a kickstart-install of ESXi.

Task 4: Create two Service Profiles from Your Template

Note immediately upon creation of your service profile, your blade servers should be associated

and start to configure themselves (because the template points to your pool). It does not matter

which ID from each identity pool (MAC, WWPN, etc) is chosen by each service profile, as long

as they are unique and virtualized (which they should be).

1. Make sure the name of your template ESXTemplate is highlighted as before (if it is
not, go back on the nav pane as at the beginning of the previous task and highlight it).

2. Right Click -> Create Service Profiles From Template

3. On the popup, use naming prefix ESX and keep the number 2. Click OK

4. Your service profiles should be created.
5. Navigate to: Service Profiles -> root -> Sub-Organizations -> your org
and highlight the name of each of your service profiles. On the General content pane you
should see “Overall Status: config” and the chosen blade server.

6. Invoke the KVM Console from the General Tab of each of your 2 service profiles

7. Your blades should complete configuration with UCS Utility OS, automatically reboot,

and automatically PXE-boot into an ESXi automated install. This takes about 10 minutes

to complete. Have a nice beverage … Keep your KVM console open because…

8. Once your ESXi instances have rebooted off the fresh hard-disk installation, write down

the IP addresses of each one (they were obtained through DHCP). Later tasks may refer

to your "first ESX server" and "second ESX server" --- use the lower IP address as the

"first" since they will present themselves in that order in the vSphere GUI anyway.

9. Close your two ESX KVM consoles. You should never need them again.


Task 5: Map Storage using WWNNs and WWPNs for Your

ESX Servers.

Best practice on UCS has you create templates specifying pools of WWNNs and WWPNs, but you

do not know what combination of the two you will get from the pools until you create a service

profile from the template.

The Clariion storage we are using in the lab requires that you know a specific combination of

WWNN and WWPN. In order to properly map storage, we have provided a little web page that

simulates your communication with the storage administrator.

1. For each service profile, highlight the service profile in the nav pane, and click the

Storage tab in the content pane. You will get information about the WWNN for that

service profile and the two WWPNs: this shows exactly what you are looking for. Recall

that fc0 is on fabric A and fc1 is on fabric B.

2. Go to the following little web page

http://10.0.4.253/cgi-bin/storagemapcontent.pl

3. You will be submitting the form a total of 4 times, twice for each service profile.


4. For each service profile, choose the correct WWNN and WWPN combinations. Make

sure you submit the correct WWPN for fabric A and then the correct WWPN for fabric

B.

From the checkboxes at the bottom, we need only prefab VM LUN 2. (the others are

either for other labs or don't really exist at all. We are not booting from a SAN LUN).


Task 6 : Use vCenter and build a vCenter Data Center with

your two ESX hosts and Shared Storage

1. Invoke vSphere client and point it to the IP of vCenter – it is running on the same remote

desktop on which you are doing all your work. vSphere client should default to

‘localhost’ for IP address/Name and the option ‘Use Windows session credentials’ should

be checked. Alternatively you can use IP 127.0.0.1 and your same remote desktop

credentials (for user Administrator).
2. Ignore certificate warnings.

3. Highlight the name of vCenter, right click and create a new DataCenter. Name it MyDCX,

where X is your pod number.

4. Highlight MyDCX, right-click and select Add Host... On the wizard:

a. Host: enter the IP of the first ESX. Enter the user name and password

(root/cangetin) and click Next

b. Accept the ssh key (click yes)
c. Confirm the information and click Next
d. Keep it in Evaluation mode and click Next
e. Keep Lockdown Mode unchecked and click Next
f. Highlight the MyDCX and click Next

g. Review the information and click Finish

Your ESX host will be added to vCenter. Once the vCenter agent is automatically loaded

on the ESX host, your ESX host will be connected. If your ESX host still shows a red

warning, that is OK (may be just warning about the amount of RAM), as long as it is

connected.

5. Repeat step 4 and add your second ESX server.
6. Highlight your first ESX server.
7. Click the Configuration tab.

8. Click Storage Adapters (inside the Hardware box on the left)

9. Click the Rescan All… blue link way near the upper right

10. On the popup, uncheck Scan for New VMFS Volumes (leave only Scan for

New Storage Devices checked). Click OK.

11. Click Storage (inside the Hardware box on the left)

12. You should see the existing Storage1 datastore, which is the boot disk

13. Click Add Storage just above the list of existing datastores. In the wizard:

a. Choose Disk/LUN, and click Next (it may scan for a while)
b. Highlight your 60GB LUN #2 (not a larger local disk, or LUN 1) and click Next
c. Select “Assign a new signature” and click Next
d. Review (nothing to choose) and click Next, then click Finish

14. Your new datastore should appear in the list; you may have to wait up to 15 seconds or so

(it will be named snap-xxxx)
15. On the left pane, highlight your second ESX server

16. Click Storage Adapters (inside the Hardware box on the left)

17. Click the Rescan All… blue link way near the upper right


18. On the popup, uncheck Scan for New VMFS Volumes (leave only Scan for

New Storage Devices checked). Click OK.

19. Click the Configuration tab.

20. Click Storage (inside the Hardware box on the left)

21. You should see the existing Storage1 datastore, which is the boot disk

22. If you do not yet see the shared storage (snap-xxx), click Refresh near the word

Datastores near the upper right.

23. Your new datastore (snap-xxx) should appear in the list


Task 7 : Configure Networking on the "Old vSwitch"

In order to get N1K launched, we will have to get the VSM running under the "old vSwitch" first, which
means we have to get its VLANs running through the old vSwitch. It's a bit of a rigmarole to
configure these VLANs once on the old switch and then again on the N1K itself, but there is no

other way to do this lab:

1. In your vSphere client, highlight one of the ESX servers.

2. Click the Configuration tab

3. Click Networking in the Hardware box

4. You should see a picture of the "old virtual switch" vSwitch0 on the right.

Note that the "old vSwitch" uses the the term "No VLAN (or "VLAN 0") for whichever

VLAN happens to be flowing native on the Uplink. The "old vSwitch" doesn't know

what this is, you just have to know from looking northbound (recall it is 198, the one we

will use for mgmt).

5. Click on Properties.. to the right of the vSwitch0 (the lower one). If you see the

popup that mentions IPv6, that is the wrong one --- go back and look for the other Properties..

6. Highlight the VM Network (Virtual Machine Port Group) and click

Edit.. at the bottom. At this point you should be here:


7. Modify the VLAN ID (Optional) field and make it 200 (which as you recall is the

VLAN over which we will run VM data).

8. Click OK (you should see the VLAN ID change on the Port Group Properties on the right

side).

9. Click Add.. on the lower left

10. Choose Virtual Machine radio button and click Next

11. Enter under "Network Label" OLDSW-CONTROL and "VLAN ID" 930. Click "Next" and

then "Finish"

12. Repeat steps 9-11 (choosing "Virtual Machine") each time with the following two new

network labels (port groups):

a. "Network Label" OLDSW-MGMT with "VLAN ID" None (0)

b. "Network Label" OLDSW-PACKET with "VLAN ID" 931

13. Close the "vSwitch0 Properties window"

14. Click Add.. on the lower left one more time

15. This time choose the VMKernel radio button and click Next..

16. Use "Network Label" OLDSW-VMOTION and "VLAN ID" 932, and check ONLY the

Use this port group for vMotion checkbox. Click Next.

17. Use the Use the following settings radio button with:

a. IP address 192.168.X.1 (where X is your pod number)

[ when we repeat this whole task for second ESX, use 192.168.X.2 ]

b. Netmask 255.255.255.0

18. Click "Next" and then "Finish" When you get the popup about the default gateway, Click

"No"

19. Your full list of port groups should look like:

20. Repeat this entire task for the second ESX server. Add all networks with the same

labels and VLAN ID's. (capitalization counts!)


Task 8 : Bring in some Prefab Virtual Machines

1. In the vSphere client, highlight either ESX server

2. Bring in the prefab virtual machine RH1:

a. Choose the Configuration tab on the right

b. Click Storage in the Hardware box

c. Highlight your datastore (snap-xxxx) in the list

d. Right-click -> Browse Datastore

e. In the datastore browser, single-click the directory (RH1) on the left

f. Highlight vmx file on the right (eg RH1.vmx)

g. Right-click and select “Add to Inventory”. In the wizard:

i. Name: RH1 and click Next

ii. Ensure your MyDC datacenter is selected
iii. Highlight either ESX instance (IP) and click Next
iv. Verify and click Finish

h. Close the datastore browser
i. You will see your new VM (not yet powered on) underneath the ESX hosting it
j. Highlight the new VM and use the green arrow above to power it up
k. One of the following will happen (why it varies is a mystery…)

i. You will get a pop-up asking if you moved or copied the VM. Select “I copied it” and click OK
ii. You will get a little hard-to-see yellow balloon on the VM icon. On the VM name, Right Click -> Guest -> Answer Question and you will get the popup. Select "I copied it" and click OK
3. Back in the vSphere client, highlight the name of your new virtual machine, access the

Summary tab, and wait until it reports that VMware tools is running and reports the

virtual machine IP address (next to the label IP Addresses, not the bottom one, which is
the ESX IP).

4. You can access the virtual machine using ssh (putty) from your remote desktop as well

(login root/cangetin)

5. Repeat steps 2-3 for the other two prefab virtual machines (WinXP and Win2K08). The

password for each default user (StudentDude on XP, Administrator on

Win2K08) is cangetin. Once they are booted you can see their IP addresses from the

vSphere client as in step 3. You can also access these via RDP from your remote

desktop.


Task 9: Perform vMotion on a Virtual Machine

This test will just ensure total sanity of your base system on the "old vSwitch", before we install

N1K and migrate connections to it.

1. In the vSphere client, highlight any Virtual Machine

2. Right click on the VM name in the vSphere client and choose Migrate…. In the wizard:

a. Choose Change host and click Next

b. Pick the ESX host on which the VM is not running right now, verify validation,

and click Next

c. Keep the default radio button (High Priority) and click Next

d. Review the information and click Finish. Your VM will migrate while staying

alive.
e. Keep a ping going to a virtual machine, or a Remote Desktop or ssh session going
as you play with migrating virtual machines back and forth. You may see only a

very brief drop in connectivity.


Task 10: Install the Primary VSM for N1K

The first step for installing N1K is installing the Virtual Supervisor Module (VSM). We will be
installing it as a VM directly inside the ESX servers that will eventually be part of the same N1K.

This is supported and interesting to set up although it is not necessarily always the best choice.

1. In the vSphere client, highlight either of your ESX servers (not a virtual machine)

2. Right Click -> New Virtual Machine

3. In the Wizard:

a. Click Custom radio button and click Next

b. Enter the name VSMPrimary and click Next

c. Select your snap-xxx datastore and click Next

d. Keep the Virtual Machine Version 8 radio button and click Next

e. Change the "Guest Operating System" radio button to Linux. On the pull-down

menu, select Other Linux (64-bit) [second from the bottom]. Click Next

f. Leave it as 1 virtual socket and 1 core (this is all that is supported for VSM).

Click Next.

g. Change Memory to 2 GB. Make sure you get the right unit size. Click Next

h. Change How many NICs to 3. Choose the Network pull-downs thus (order

matters!)

i. NIC 1 – OLDSW-CONTROL

ii. NIC 2 – OLDSW-MGMT

iii. NIC 3 – OLDSW-PACKET

Click Next

i. Keep the radio button for LSI Logic Parallel. Click Next.

j. Keep the radio button for Create a new virtual disk. Click Next.

k. Change "Disk Size" to 4 GB. Keep all other defaults and click Next.

l. Keep all the defaults (SCSI(0:0) and unchecked "Independent") and click Next.

m. On review page, check Edit virtual machine settings checkbox and

click Continue.

n. On the Hardware tab, highlight New floppy (adding) and click Remove.

Everything else should be fine but check the NIC's carefully that you have

followed the instruction in step h. Click Finish.

4. Highlight your new VSMPrimary VM and invoke the virtual console.

5. Click the green arrow power-on button (you will see it start to try to boot off the network

– we will interrupt that once we provide the virtual media)

6. Click the Virtual Media icon and select "Connect to ISO image on local disk" as shown:


a. From the file chooser, pick My Computer on the left and then navigate to File
Repository (V:) -> N1K -> Nexus1000v.4.2.1.SV1.4a -> VSM Install and choose the
nexus-1000v.4.2.1.SV1.4a.iso file.

7. Click back inside your VSM console and do CTRL-ALT-INSERT to reset the VM. This

time it should boot off the virtual media.

8. Choose the first menu item (Install Nexus1000V and bring up the new

image) or just let the timer expire. The VSM will install automatically. There will be a

delay of only a minute or two (amazingly fast install) after the line that contains "Linux-

initrd.."

9. The installer will fall into the configuration dialogue. Answer thus:

Enter the password for "admin": Cangetin1
Confirm the password for "admin": Cangetin1
Enter HA role [standalone/primary/secondary]: primary
Enter the domain id<1-4095>: use_your_pod_number!!
Would you like to enter the basic configuration dialog? yes
Create another login account: n
Configure read-only SNMP community string: n
Configure read-write SNMP community string: n
Enter the switch name: N1K
Continue with Out-of-band (mgmt0) management configuration? yes
Mgmt0 IPv4 address: 192.168.198.(100 + podnumber)
Mgmt0 IPv4 netmask: 255.255.255.0
Configure the default gateway? n
Configure advanced IP options? n
Enable the telnet service? n
Enable the ssh service? y
Type of ssh key you would like to generate: rsa
Number of rsa key bits: 1024
Enable the http-server: y
Configure the ntp server? n
Vem feature level will be set to 4.2(1) SV1(4), Do you want to reconfigure? n
Configure svs domain parameters? y
Enter SVS Control mode (L2/L3): L2
Enter control VLAN: 930
Enter packet VLAN: 931
Would you like to edit the configuration? n
Use this configuration and save it? y


10. You should get the login prompt on the console. Quit out of the virtual console (you will

be unhappy with it because the cursor gets stuck in there, since there are no VMware tools for

it).

11. Access your VSM using putty(ssh) and the IP address you entered for the VSM in step 9.

12. Log in using admin and the password (Cangetin1) you entered in step 9. This is

where you will do all your VSM driving from.

Task 11 (OPTIONAL): Install a Secondary VSM

If you install a secondary VSM, it will automatically replicate the primary's state, and take over

for the primary (including the mgmt IP address) if the primary fails. There is no automatic

failback (if the "former primary" then reboots, it will become the secondary).

When a VSM is in the secondary role you can't do anything on it (every command will be

answered with "I am the secondary, talk to the other guy" kind of response). Therefore having a

secondary VSM won't add that much to the rest of this lab, but since you would definitely want

one in the real world:

1. Repeat Task 10, steps 1-8 precisely except for the VM name (make it VSMSecondary)

2. In the setup dialogue after the new secondary VSM installs:

Enter the password for "admin": Cangetin1

Confirm the password for "admin": Cangetin1

Enter HA role [standalone/primary/secondary]: secondary

Setting HA role to secondary will cause a system reboot.

Are you sure? yes

Enter the domain id<1-4095>: use_your_pod_number!! [must match primary]

// It reboots into secondary mode. It has no name or IP. You can log in but it won't do

anything (try).

If you want to shut down the primary VSM (highlight its name in vSphere client and do

right-click-> Power Off) and watch the secondary takeover, go for it! The secondary will

turn into the primary. Your ssh connection to VSM will be disconnected but you can

reconnect to the same IP and get to the "new primary"
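If you do try the failover, two standard NX-OS show commands on the active VSM are handy for watching the roles (a hedged aside, not a required lab step):

N1K# show system redundancy status // shows which supervisor is active and which is standby
N1K# show module // with HA, the two VSMs appear as modules 1 and 2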


Task 12: Copy VEM Software to ESX and Install

We will manually copy the VEM software, then log into ESX and install. The VEM software

here is one that was recently retrieved from cisco.com. The VEM is also part of the N1K install
media, but we are using the latest one from cisco.com in the lab to make sure we have the latest
version for the optional VM-Fex addon to the lab (which requires the latest VEM version).

1. Launch WinSCP from your remote desktop

2. For "Host Name", enter the IP of one of your ESX servers.. User the user/password

root/cangetin and keep everything else the same (SFTP is fine). Click Login…

3. Accept any ssh key warning

4. On the right (remote) side, move to (double-click) on the tmp folder

5. On the left side (local machine), navigate to the V: drive (File Repository), and then the VEM folder within:

6. Drag the .vib file (cross_cisco-vem-v132-4.2.1.1.4.1.0-3.0.4.vib)

over to the remote side (not into any of the sub-folders). Confirm on the popup that you

want to copy the file.

7. Quit out of WinSCP

8. Open a putty (ssh) connection to that same ESX server to which you just copied the VEM

software (same IP). Log in with root/cangetin

9. Install the VEM:
# ls -l /tmp/*.vib
# esxcli software vib install -v /tmp/*.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v132-esx_4.2.1.1.4.1.0-3.0.4
VIBs Removed:
VIBs Skipped:

10. Repeat all of steps 1-9 (copy files and install the VEM) for the other ESX server
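As an optional sanity check on each host (hedged; not part of the official lab steps), you can list the installed VIBs from the same ssh session and confirm the Cisco VEM package is present:

# esxcli software vib list | grep cisco
// the cisco-vem-v132-esx package should appear in the output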


Task 13: Install the VSM plug-in on vCenter

Before you can tell your VSM to connect to vCenter you need to retrieve an "extension" file

from the VSM and install it on the vCenter server. This will serve as authentication so that
vCenter will allow the connection from the VSM. (Note that vCenter could have any number of
these N1K extension files installed if you were using multiple N1K installations (in different
Datacenters) in the same vCenter.)

1. Access http://IP_of_your_VSM from a web browser. Just use Firefox. Please.

Note this is the IP you gave to your VSM in Task 10, step 9: 192.168.198.(100 + podnumber)

2. Right click on cisco_nexus_1000v_extension.xml and save the file. You can

save it anywhere (Downloads or My Documents is fine) as long as you remember

where it is.

3. In the vSphere client, choose the Plug-ins -> Manage Plug-ins menu:

4. At the white space near the bottom of the Plug-in Manager window, right-click to pop up

the New Plug-in button, and click on it.

5. In the Register Plug-in window, click on Browse… and navigate to and select the

extension file that you saved. You should be here:

6. After you double-click or Open the extension file, its contents will appear in the "View

Xml (read-only)" area. Click Register Plug-in at the bottom.

7. Click Ignore on the certificate warning popup.


8. Your extension will be registered in vCenter and you will see it in the Plug-in Manager:

9. Click "Close" at the bottom of the Plug-in Manager window to close it.

Task 14: Switch Between "Hosts and Clusters" and

"Networking" Views in vSphere Client

In many of the future tasks we will be switching back and forth between the "Hosts and Clusters"

view in vSphere client (the only one we have been using so far) and the Networking view, which

will show information specifically from the point of view of the DVS. There are a few ways to

navigate back and forth; the easiest is to click on the word "Inventory" in the white location bar

and follow the menus as shown:

1. Switch to the "Networking" view, as shown.

Your display should look like the following. Note that since the DVS does not exist yet,

all you are seeing are your "old vSwitch" port groups:

2. Stay on this View for now. As we go on, the instructions will say "switch to the Hosts

and Clusters view" and "switch to the Networking view" and you will know what to do.


Task 15: Create the Connection from VSM to vCenter (and

the DVS)

Now that the extension key is successfully installed in vCenter we can connect our VSM to

vCenter. This will automatically create the N1K DVS within vCenter. The name of the DVS

will be the same as the hostname of your VSM.

1. On your remote desktop (which is the same as the vCenter server), figure out which IP

address you have on the 192.168.198. network (call this the vCenter IP)

[ run ipconfig /all in a cmd window. Make sure you get the one that begins with

192.168.198]

2. Log in (or access an existing login) to VSM

(192.168.198.(100 + podnumber)) using putty (ssh)

3. Configure the connection parameters:
N1K# conf t

N1K(config)# svs connection MyVC

// "MyVC" is just a random identifier for the connection --- doesn't have anything

// to do with vCenter machine name or datacenter name or anything

N1K(config-svs-conn)# remote ip address vcenter-IP-address

N1K(config-svs-conn)# protocol vmware-vim

N1K(config-svs-conn)# vmware dvs datacenter-name MyDCX

N1K(config-svs-conn)# connect

N1K(config-svs-conn)# copy run start

At this point look at the vSphere client (Networking view). You will see your DVS

named N1K (after VSM hostname) created inside a folder named N1K. The tasks at the

bottom of vSphere client will indicate this configuration is done by your Nexus 1000v.

When you are done, vSphere client should look like:
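You can also confirm the connection from the VSM side (an optional check; show svs connections is a standard Nexus 1000v command):

N1K# show svs connections
// the MyVC connection should show as connected, with datacenter name MyDCX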


Task 16: Create VLANs and Port Profiles for Uplinks on the

VSM

We will now start N1K configuration on the VSM. The first step is creating VLANs and port

profiles for the uplinks. Remember we will have two types of uplinks --- one carrying only VM

data, the other for everything else.

1. Configure VLANs on the VSM:
N1K# conf t

N1K(config)# vlan 930

N1K(config-vlan)# name control

N1K(config-vlan)# vlan 931

N1K(config-vlan)# name packet

N1K(config-vlan)# vlan 198

N1K(config-vlan)# name mgmt

N1K(config-vlan)# vlan 200

N1K(config-vlan)# name vmdata

N1K(config-vlan)# vlan 932

N1K(config-vlan)# name vmotion

N1K(config-vlan)# copy run start

N1K(config-vlan)# sh vlan

VLAN Name Status Ports

---- --------------------- --------- ------

1 default active

198 mgmt active

200 vmdata active

930 control active

931 packet active

932 vmotion active

.

.

.


2. Configure a port-profile for the VM data (only) uplinks:

Note the following:

a. we are creating a port profile with mode "private-VLAN trunk promiscuous" to

enable private VLAN configuration within the data VLAN in a later lab

b. When you finish with "state enabled", the port profile information will get pushed

as a port group to vCenter

N1K(config)# feature private-vlan

N1K(config)# port-profile type eth data-uplink

N1K(config-port-prof)# switch mode private-vlan trunk promiscuous

N1K(config-port-prof)# swi priv trunk allow vlan 200

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# channel auto mode on mac-pinning

N1K(config-port-prof)# state enabled

Note if you are looking at vCenter (Networking view), you can see the new port-group

data-uplink get created.

3. Configure a port-group for uplinks for all other traffic:

Note that system VLAN is required for management, control, and packet. It means that

as soon as a VEM (ESX) is attached to the DVS, these VLANs will be configured as part

of the startup configuration of the VEM so that it can communicate with VSM (solves a

chicken and egg problem).

N1K(config-port-prof)# conf t

N1K(config)# port-profile type eth other-uplink

N1K(config-port-prof)# switchport mode trunk

N1K(config-port-prof)# switch trunk allow vlan 198,930-932

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# channel auto mode on mac-pinning

N1K(config-port-prof)# system vlan 198,930,931

N1K(config-port-prof)# state enabled

N1K(config-port-prof)# copy run start

Note if you are looking at vCenter (Networking view), you can see the new port-group

other-uplink get created.
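Before moving on, you can review what the VSM has pushed (an optional, hedged check using standard Nexus 1000v show commands):

N1K# show port-profile brief
N1K# show port-profile name data-uplink
// both data-uplink and other-uplink should show up enabled, with the allowed VLANs you configured above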


Task 17: Add the Two ESX Hosts to the DVS

This procedure adds the ESX hosts as members of the DVS. You will be specifying the

appropriate uplink port groups (from the VSM port profiles) to attach to specific uplinks:

1. In vSphere client, go to the Networking view (if you are not already there)

2. Single click to highlight the N1K DVS (not the folder, the guy inside the folder) on the

left.

3. Right click -> Add Host…

4. Set the next window precisely as in the screenshot below (IP's of your hosts will be

different). For both hosts, we know which uplinks will (can) carry VM data, and which

carry all other data.

Choose the port groups circled below from the pull-down menus. You must check the

checkboxes on the left for every host and NIC you are choosing, as shown:

5. Click Next

6. Leave the page alone (everything says "Do not migrate"). We will migrate vmk interfaces

(management and vMotion) to the DVS later. Click Next.

7. Leave the page alone. We will migrate VM networking to the DVS later. Click Next.

8. Click Finish.


9. Your hosts are added to the DVS. To Confirm:

a. In the VSM, you will see messages about the new VEM (3 and 4) modules being

discovered. On the VSM run:

N1K# show module

// First supervisor installed is always 1, second supervisor always 2,

// VEM's are numbered starting at 3

b. Staying on Networking view, click the Hosts tab on the right. You will see your

two hosts connected.

c. Switch to the Hosts and Clusters view.

d. Highlight one of your hosts (not a VM) on the left

e. Click the Configuration tab on the right

f. Click Networking (in the Hardware box).

g. Click vSphere Distributed Switch to show the DVS

Task 18: Create "veth" Port Profiles (for Virtual Machines)

Each of these port profiles will specify how a VM adapter is attached to the DVS. Since none of

our VM's is VLAN-tag aware, we make all these port profiles "access" on a single VLAN.

(if you had a VM that could handle VLAN tagging downstream, you could be completely free to

set up "veth" profiles as trunks)

1. Switch to the Networking view in vSphere client so you can see port groups get pushed in.

2. On the VSM, create a "veth" profile for VM data:
N1K# conf t

N1K(config)# port-profile type veth vmdata

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 200

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# state enabled


3. On the VSM, create a "veth" profile for vMotion:
N1K(config-port-prof)# port-profile type veth vmotion

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 932

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# state enabled

4. On the VSM, create a "veth" profile for management (this has to be marked as system

VLAN as discussed earlier):
N1K(config-port-prof)# port-profile type veth mgmt

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 198

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# system vlan 198

N1K(config-port-prof)# state enabled

5. On the VSM, create a "veth" profile for control (this has to be marked as system VLAN

as discussed earlier).

Note: we need this only because the VSM is going to run as a VM inside this same

DVS.

N1K(config-port-prof)# port-profile type veth control

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 930

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# system vlan 930

N1K(config-port-prof)# state enabled

6. On the VSM, create a "veth" profile for packet (this has to be marked as system VLAN

as discussed earlier).

Note: we need this only because the VSM is going to run as a VM inside this same

DVS.

N1K(config-port-prof)# port-profile type veth packet

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 931

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# system vlan 931

N1K(config-port-prof)# state enabled

N1K(config-port-prof)# copy run start


Task 19: Migrate Virtual Machine Networking to the N1K

We will hook up virtual machines to the DVS. Remember the model that while all port

configuration is done on the VSM, the "virtual wiring" of VM's to these ports is done from

vCenter.

In the vSphere client:

1. Go to the "Hosts and Cluster" view

2. Highlight your RH1 virtual machine.

3. Right click -> Edit Settings…

4. In the Properties popup, highlight Network adapter 1

5. On the right side, change the "Network Label" pulldown to vmdata(N1K). You should

be here:

6. Click OK

7. Run ping and/or putty(ssh) from your remote desktop to the IP for RH1 (remember you

can get it from the Summary tab for the VM – not the one in blue, which is the ESX IP).

8. Repeat steps 2-6 for the other two VM (WinXP and Win2K08). You can test with ping or

an RDP connection from the remote desktop.

In the VSM, see how virtual ports are represented:
N1K# sh port-profile usage name vmdata

N1K# sh int virtual

N1K# sh int veth1 //nice to have "real insight" into VM networking


Task 20: Migrate VSM(s) to the DVS

Here we will do the same operation for the VSM VM itself (and VSM secondary, if you have

one).

In the vSphere client:

1. Highlight your VSM(Primary) virtual machine.

2. Right click -> Edit Settings…

3. Make the change to the Network adapters in the same manner as before, but this time, change

all 3 VSM adapters: (in this order!!!!!)

Network adapter 1 -> control(N1K)
Network adapter 2 -> mgmt(N1K)
Network adapter 3 -> packet(N1K)

4. Click OK

5. Repeat steps 1-4 for VSM Secondary machine, if you have one

In the VSM, prove that everything still looks normal:

N1K# sh module

N1K# sh int virtual

Task 21: Migrate vmk's (ESXi management and vMotion) to

the DVS

We will move the remaining functionality to the DVS. After this step, the "old vSwitch" is no

longer used at all, and in the real world you could delete it completely.

1. On the vSphere client, go to the Hosts and Clusters view, if not there already

2. Highlight one of your ESX servers (not a VM)

3. Click the Configuration tab

4. Click Networking (in the Hardware box)

5. At the top on the right change the "View:" to vSphere Distributed Switch if it

is not there already.

6. To the right of the label "Distributed Switch: N1K" (just above the picture), click the blue
link Manage Virtual Adapters.

7. In the popup, click the blue "Add" link at the top

8. Select the radio button Migrate existing virtual adapters and click Next

9. Select the two appropriate port groups for the virtual adapters, as shown:

10. Click Next. Click Finish to confirm and close the popup

11. Repeat steps 2-10 precisely for your other ESX server


Task 22: Examine vMotion and the DVS

You will note that across vMotion, a VM retains its virtual port identity (and therefore all of its

characteristics) on the DVS.

1. On the VSM, show all virtual ports (note that you will see 4 vmk's now as well –

management and vMotion on each ESX server):

N1K# sh int virtual

2. Pick one of the regular virtual machines (RH1, WinXP, Win2K08) and note its vethX

identity. Look at the details of that port, including packet counts, etc.

N1K# sh int vethX

3. On vSphere client, migrate that virtual machine to the other ESX server:

a. go to Hosts and Clusters view if not already there

b. highlight the VM name

c. Right Click -> Migrate…

d. Change host radio button and Click Next

e. Choose the ESX server on which the VM is not currently running and Click Next

f. Confirm (High Priority) and click Next and Finish and watch the VM migrate

g. Repeat steps 1 and 2 on the VSM:

i. Note that the port identity is the same

ii. VSM knows it has moved to the other VEM module (and server)

iii. the port details are still identical (including preserving packet counts, etc)

Task 23: Apply IP Filtering down to VM Level

Starting with this task we will examine some of the enterprise-level network administration that

extends down to the VM level with Nexus 1000.

The first feature is IP filtering, or IP-based access lists.

Filters are defined using rules that match layer 3 or layer 4 headers

Filters are applied to either individual (veth) ports or to port-profiles. It is normally best practice

to apply a filter to a port-profile (since this is pretty much the point of port-profiles). If you end

up thinking you want to apply a filter to only some VM's that are part of a port-profile, you

probably really want two different port-profiles.

1. From your remote desktop, verify that you can establish an RDP connection to the

Win2K08 machine (remember to get its IP, highlight the VM in vSphere client and look

at summary tab). You don't need to log in, just establishing the connection is fine (if you

want to log in, remember it is administrator / cangetin).


2. Create an IP filter (access-list) on the VSM (NoRDP is just a name you are inventing for

the filter).

N1K# conf t

N1K(config)# ip access-list NoRDP

N1K(config-acl)# deny tcp any any eq 3389

N1K(config-acl)# permit ip any any

3. Create a vmdata-nordp port profile and specify the IP filter within it:

N1K(config-port-prof)# port-profile type veth vmdata-nordp

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 200

N1K(config-port-prof)# ip port access-group NoRDP out

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# state enabled

N1K(config-port-prof)# copy run start

4. From the vSphere client Hosts and Clusters view

a. Highlight the VM Win2K08

b. Right Click -> Edit Settings

c. Change Network Adapter 1 label (pulldown menu) to vmdata-nordp(N1K)

and click OK

d. Verify that you cannot establish an RDP session to the Win2K08 any more (or

that an existing connection is dead)

e. Verify that you can still ping the Win2K08

f. Verify that you can still RDP to the WinXP virtual machine (it is still under the

original non-filtered profile)

5. Reattach the Win2K08 to the regular vmdata(N1K) port group, and make sure your

ability to RDP to this VM is restored.

Note you can just leave the filtered port-profile configured (you don't need to delete it)

and just attach it to VM's, as needed. Hence the best practice of always applying filters to

port-profiles rather than individual veth ports.
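If you want to watch the filter from the VSM side while you test (a hedged aside using standard show commands):

N1K# show ip access-lists NoRDP
N1K# show port-profile name vmdata-nordp
// the port-profile output lists the veth ports currently assigned to it, so you can see the Win2K08 adapter attach and detach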


Task 24: Configure Private VLANs

Private VLANs allow machines to appear to be on the same VLAN (i.e., they can be on the same
subnet) while being isolated from each other.

Some quick definitions:

Primary VLAN: all participants seem "from the outside world" to belong to this VLAN

Isolated VLAN (within the primary): machines on this VLAN are isolated from any

other machine on an isolated or community VLAN

Community VLAN (within the primary): machines on this VLAN are not isolated from

each other, but are isolated from machines on an isolated VLAN.

Promiscuous Port: the "ingress/egress" to the Private VLAN environment. All inbound

packets appear tagged with the primary VLAN. As they enter, they are mapped to an isolated

or community VLAN. Any packets leaving (outbound) on the promiscuous port are

mapped back to the primary VLAN.

Note: traffic is interrupted on a VLAN that turns into a private primary (as we are doing

with 200 below). You will see if you try to ping a VM between steps 1 and 2 that you can't.

The traffic will be restored after step 2. In the "real world" you should design private

VLANs before any traffic on them becomes critical!

1. On the VSM, declare our normal VM data VLAN 200 as a private-VLAN primary.

Create VLAN 2009 as an isolated VLAN and VLAN 2999 as a community VLAN.

N1K# conf t

N1K(config)# vlan 200

N1K(config-vlan)# private-vlan primary

N1K(config-vlan)# vlan 2009

N1K(config-VLAN)# private-vlan isolated

N1K(config-VLAN)# vlan 2999

N1K(config-VLAN)# private-vlan community

N1K(config-VLAN)# vlan 200

N1K(config-VLAN)# private-vlan association add 2009,2999

N1K(config-VLAN)# sh vlan private-vlan

Primary Secondary Type Ports

------- --------- --------------- -------

200 2009 isolated

200 2999 community

2. Set the promiscuous port-profile (data-uplink) to map between the primary and the

isolated and community VLANs:
N1K(config-VLAN)# port-profile data-uplink

N1K(config-port-prof)# switch priv mapping trunk 200 2009,2999


3. Make sure traffic on the vmdata VLAN (200) is flowing as usual (make sure you can

ping, RDP, ssh as appropriate)

4. Create port-profiles to attach VM's to the new isolated and community VLANs.
N1K# conf t

N1K(config)# port-profile type veth vm-isolated

N1K(config-port-prof)# switch mode private-vlan host

N1K(config-port-prof)# switch priv host-assoc 200 2009

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# state enabled

N1K(config-port-prof)# port-profile type veth vm-community

N1K(config-port-prof)# switch mode private-vlan host

N1K(config-port-prof)# switch priv host-assoc 200 2999

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# state enabled

N1K(config-port-prof)# copy run start

5. In the vSphere client, attach (Network adapter 1 of) RH1 to the vm-isolated(N1K)

port-group (you should know how to do this by now)

6. In the vSphere client, attach (Network adapter 1 of ) both WinXP and Win2K08 to the

vm-community(N1K) port group

7. Verify that you can still access everything "from the outside" as if all the VM's were still on VLAN 200 (i.e., just ping, RDP, ssh as appropriate from the remote desktop).

8. Log into the RH1 (you know how to get its IP…) via putty(ssh) (root/cangetin)

9. Verify that you cannot ping the two Windows VM's (or access any other port: you can

try telnet IP_of_Win2K08 3389 for example).

10. From the remote desktop, log into the Win-XP VM via RDP

(studentdude/cangetin). This should work fine

11. Invoke a cmd window on the XP and verify that you can ping the Win2K08 VM, but that

you cannot ping the RH1.

12. From vSphere client, restore all the VM's to the regular vmdata(N1K) port group.

Verify that everyone can talk to each other now.

It is fine to leave the private VLAN configuration in place and have everyone openly

communicating on the primary VLAN, saving any port-profiles referring to other isolated

or community VLANs for later use.


Task 25: Configure ERSPAN (Encapsulated Remote Switched Port Analyzer)

You will be able to monitor (snoop on) a veth port for a particular VM. That traffic will be forwarded for analysis to a different virtual machine. The "ER" part of this means that we will be encapsulating the forwarded snooped packets at Layer 3 (so we COULD forward them across routers).

Note how "cool" this is: VM networking, even traffic that never leaves the same ESX host, is subject to the oversight of the enterprise network administrator in charge of the N1K.

What we are configuring:

The virtual port connected to WinXP will be monitored.

Traffic will be forwarded via Layer 3 to the Win2K08 VM for analysis

We can try to find evidence of the user on WinXP connecting via his web browser to facebook, as an example of a site whose use is prohibited inside our fictional enterprise due to eternal employee time wastage

Note: This monitoring does get configured on an individual (veth) port rather than a port-

profile.

1. On the VSM, identify the veth port connected to the WinXP (the source)

N1K# sh int virt

//Find the interface for WinXP, we call it vethX
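If you prefer the long forms: sh int virt is short for show interface virtual, and once you know the veth number you can inspect that port directly (vethX here is a placeholder for the number you found):

N1K# show interface virtual
N1K# show interface vethernet X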

2. In vSphere client, find the IP for the Win2K08 VM (the destination)

3. Configure an ERSPAN session on this particular veth:

N1K# conf t

N1K(config)# monitor session 1 type erspan-source

N1K(config-erspan-src)# description "Monitor XP"

N1K(config-erspan-src)# source interface vethX both

N1K(config-erspan-src)# destination ip Win2K08IP

N1K(config-erspan-src)# erspan-id 999

N1K(config-erspan-src)# no shut


4. Configure a port profile for a vmk interface to transport the ERSPAN session.
Nexus 1000v uses a VMkernel interface to implement the Layer 3 transport when using ERSPAN. The capability l3control setting marks the interface as being used for internal Layer 3 functionality by the N1K. As usual, doing port configuration via new port profiles is best (the unabbreviated forms of the commands below are shown after this block).

N1K(config-erspan-src)# port-profile ERSPAN

N1K(config-port-prof)# capability l3control

N1K(config-port-prof)# switch mode access

N1K(config-port-prof)# switch access vlan 200

N1K(config-port-prof)# vmware port-group

N1K(config-port-prof)# no shut

N1K(config-port-prof)# system vlan 200

N1K(config-port-prof)# state enabled

N1K(config-port-prof)# copy run start
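For reference, the abbreviated commands above expand as follows (a sketch using the same VLAN). capability l3control marks this vmk as usable for the VSM's internal Layer 3 functions such as the ERSPAN transport, and system vlan 200 lets the port forward on that VLAN even before VSM-to-VEM communication is fully established:

N1K(config-port-prof)# switchport mode access
N1K(config-port-prof)# switchport access vlan 200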

5. Configure the new vmk in vSphere client to use the profile:

a. Go to Hosts and Clusters view if not already there

b. Highlight either of your ESX servers (not a VM)

c. Go to Configuration tab and click Networking (in Hardware box)

d. Change view to vSphere Distributed Switch if not already there

e. Click Manage Virtual Adapters

f. In the popup, click the blue Add (you should be here now:)

g. Choose New Virtual Adapter radio button and click Next

h. Choose VMKernel (only choice) and click Next

i. Keep Select port group radio button and select your ERSPAN port group

from the pulldown menu. Click Next.

j. Choose Obtain IP settings automatically radio button and click Next

k. Click Finish. When warned in a popup about the absence of a default gateway, click No.


l. Back on the picture of the switch, verify there is a new vmk under the ERSPAN port

group. Open it up (with the +) and verify it has an IP from DHCP on our data net.

6. Show that the monitor session is active from the N1K point of view (on the VSM):
N1K# sh monitor session 1

7. Establish an RDP connection to the WinXP and log in (studentdude/cangetin)

and launch the web browser of your choice. This will be the traffic that we are

monitoring.

8. Establish an RDP connection to the Win2K08 and log in

(administrator/cangetin). Launch Wireshark from the desktop or start menu.

This will be our view of the packets being snooped.

9. In Wireshark, click on the VMware vmxnet3 virtual network device (blue link):
10. Choose the interface with the address on our data network and click Start:

11. Fill in the filter at the top of the Wireshark capture window as shown and apply:

12. Go back to the WinXP machine:

13. Browse for some fun stuff that may violate our fictional company policy (facebook, or

whatever).

14. Back in the Wireshark on the Win2K08, note that the traffic is being captured. Highlight

a single packet and play with viewing all the details at every layer.


Optional Lab Addendum: Removing Nexus 1000v

The reasons for being interested in this lab addendum are:

1. You are going to continue on to the optional VM-FEX addendum that comes after this

addendum. VM-FEX is a different DVS implementation, and you can only have one

DVS at a time in a particular ESX instance.

2. You are simply interested, for general purposes, in how to "back out of" a Nexus 1000v configuration.

Task R1: Put VM Network Adapters Back on "Old vSwitch"

1. In vSphere client, go to the Hosts and Clusters view if you are not already there. Highlight your RH1 virtual machine.

2. Right Click Edit Settings and reattach your Network Adapter 1 to VM Network (you

should know how to do this by now )

3. Repeat steps 1 and 2 for the WinXP and Win2K08 Virtual Machines

4. Right click the VSMPrimary virtual machine

5. Right Click Edit Settings and reattach the three network adapters (in this order) to

OLDSW-CONTROL, OLDSW-MGMT, OLDSW-PACKET

6. Repeat step 5 for VSMSecondary if you have one

Task R2: Migrate VMKernel Adapters Back to "Old vSwitch"

1. In Hosts and Clusters view (where you already are), highlight one of your ESX server

names (not a VM name)

2. Go to Configuration tab and click Networking (in the Hardware box)

3. Make sure the view is on vSphere Distributed Switch

4. Click Manage Virtual Adapters… (in blue to the right of Distributed Switch: N1K)

5. If there is a vmk2 (from the ERSPAN exercise), highlight it, click Remove, and confirm

6. In the popup, click on vmk1 (which is for vmotion) and click Migrate (in blue at the

top)

7. Verify that vSphere Standard Switch Name vSwitch0 is selected and click Next

8. Make Network Label: OLDSW-VMOTION and VLAN ID: 932 and click Next

9. Click Finish (this vmk will be migrated back to the old vSwitch)

10. Still in the Manage Virtual Adapters popup, click vmk0 and Migrate

11. Verify that vSphere Standard Switch Name vSwitch0 is selected and click Next

12. Confirm (don't touch) Network Label: VMkernel and VLAN ID: None(0) and click Next

13. Click Finish (this vmk will be migrated back to the old vSwitch)

14. Close the Manage Virtual Adapters popup

15. Repeat steps 1-13 for the other ESX server


Task R3: Remove the Hosts from the DVS

1. Go to the Networking View
2. Highlight the N1K DVS (not the folder, the DVS within)
3. Click on the Hosts tab
4. Highlight your first ESX host
5. Right Click Remove from vSphere Distributed Switch (2nd from bottom of the popup menu)
6. Confirm that you want to remove the host from the DVS (click Yes)
7. Repeat steps 4-6 for the second ESX host

Task R4: Remove DVS (through Commands on VSM)

1. Access the VSM
2. Run commands to remove the DVS:

N1K# conf t

N1K(config)# svs connection MyVC

N1K(config-svs-conn)# no vmware dvs
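If you want a quick sanity check from the VSM side as well, the SVS connection and domain state can be inspected with the standard show commands (exact output varies by release):

N1K# show svs connections
N1K# show svs domain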

If you look at the vSphere client (still in the Networking view), you should see your N1K DVS and all its port groups disappear.

At this point, you could delete your VSM(s). If you are continuing on to the VM-FEX addendum, you don't need to worry about them; just leave them "dangling", shut them off, whatever you want.


Optional Lab Addendum: VM-FEX

VM-FEX is an integrated UCS feature that works together with ESX and ESXi 4.0 Update 1 and

above. VM-FEX presents an ESX distributed virtual switch (DVS) that passes virtual machine

networking directly up to UCS-controlled vNICs. With this feature, virtual machines become

visible in UCS manager, and virtual machine networking can be done with the exact same

policies as physical machine networking. All of this is within a single UCS.

The architecture for this feature is described by the figure below. A good summary is:

a. The Virtual Ethernet Module (VEM) must be installed in each ESX. This is the same

VEM used for Nexus 1000v (in this lab, you do not need to reinstall it)

b. A set of "dynamic vNICs" is created in UCS Manager for the server (service profile). All

you need to know in advance is how many you want.

c. Port profiles for VM networking are created on UCS Manager. These define VLAN(s) and network and QoS policies; they are the exact same kinds of policies UCS already uses for physical server networking. They are sent to vCenter as port groups to which to attach VM's.

d. When a VM is attached in vCenter, a dynamic vNIC is chosen on which to "pass

through" the traffic directly to the UCS control. The VM automatically becomes visible

in UCS.

e. Regular "static" vNICs are defined as placeholders for the DVS uplink. There is no real

data on this DVS uplink (since all data travels on the dynamic vNICs), but VMware

needs an uplink defined anyway.


UCS VM-FEX with DirectPath I/O (High-Performance Mode)

The normal functionality of the VM-FEX feature is not hypervisor bypass (although it is sometimes incorrectly described as such). Instead it is pass-through: the VEM running in software in the ESX is still responsible for providing a virtual port attached to a VM virtual adapter, and passes the traffic through to the appropriate uplink. This normal functionality works with any choice of virtual adapter provided to the VM.

UCS VM-FEX has a "high performance mode" that can be requested via the port-profile for all

or some VM's (mixtures of VM's running in normal mode and those running in high-

performance mode are fine).

In high performance mode, VM traffic is migrated from the normal "pass-through" model to a

real bypass model, where it is not handled by the VEM. Traffic is routed directly to the Cisco

Virtual NIC (M81KR in our example), which itself can perform virtual adapter emulation. In this

case the virtual adapter provided to the VM must be of type vmxnet3. The picture looks like

this:


Comparing VM-FEX High Performance Mode to VMware VMDirectPath (no DVS)

VMware has a VMDirectPath feature which lies outside the scope of the DVS model. In

VMWare's VMDirectPath, PCI devices (including network adapters) are simply delegated

entirely to a virtual machine.

VMDirectPath has some disadvantages which are solved by the UCS VM-FEX high

performance mode feature:

VMDirectPath (no DVS) vs. UCS VM-FEX High Performance:

Drivers: VMDirectPath requires the VM OS to have the correct drivers for the actual network hardware (eg Cisco enic); with VM-FEX high performance, the VM still uses a virtual adapter (vmxnet3).

Migration: with VMDirectPath, vMotion may be possible (ESXi 5.0 and above only, and manually matched configurations are required on the respective ESX hosts); with VM-FEX high performance, vMotion is always possible: the VM is automatically moved to traditional pass-through mode for the migration, then moved back to Direct mode on the new ESX.

DVS integration: VMDirectPath has no integration with the DVS (completely separate administration for VMs that can use VMDirectPath and others that may be on a DVS); VM-FEX high performance is completely integrated with the DVS (VMs are always configured identically, whether or not they use the DirectIO / high performance mode feature).

Comparing VM-FEX to Nexus 1000v

Nexus 1000v is another DVS that can be used with UCS. Its goals end up being quite different from those of the VM-FEX DVS. One would not necessarily choose VM-FEX just because it is part of UCS; rather, one needs to understand the goals of each DVS and match them to one's own needs.

Feature                                            VM-FEX   Nexus 1000v
-------------------------------------------------  -------  -----------
VM networking integrated with UCS Manager
  in a single UCS                                   yes      no
Same policies used to manage physical machine
  networking and VM networking                      yes      no
Optimized performance with hypervisor bypass        yes      no
Management across multiple UCS and different
  platforms across the entire datacenter            no       yes
Advanced Nexus features applied to VMs
  (ERSPAN, IP filtering, etc.)                      no       yes


Task V1: Create a BIOS Policy to Enable all VT Direct IO

features and Attach it to your Service Profile Template

1. In UCS Manager, on the Servers tab, navigate to: Policies > root > Sub-Organizations > yourorg > BIOS Policies

2. Highlight the word BIOS Policies, Right Click Create BIOS Policy

3. In the popup, create a name AllDirectIO

4. Leave the Reboot on BIOS Settings Change unchecked

// yes BIOS changes require reboot, but you are going to do something else

// that requires reboot in the very next task

5. Click directly on Intel Directed IO on the left

6. Click the "enabled" radio button for all 5 items
7. Click Finish
Go back to your service profile template (Service Profile Templates > root > Sub-Organizations > yourorg; highlight ESXTemplate)

8. Click on Policies tab

9. Expand BIOS Policies and select your new AllDirectIO

10. Click Save Changes and acknowledge the confirmation

Task V2: Create Dynamic VNICs for VM-FEX in your Service

Profile Template

1. Still highlighting your template on the left, click the Network tab and choose Change

Dynamic vNIC connection policy on the left.

2. Fill in the pop-up as follows: a local policy with 6 dynamic vNICs, using the VMWarePassThru adapter policy (this optimizes some adapter details for VM-FEX):

3. Click OK
4. You will get a popup warning you that your ESX servers with service profiles derived from this template will reboot. Click Yes and then OK

5. Your ESX servers will reboot. In vSphere client Hosts and Clusters view, wait until both

ESX servers are rebooted and reconnected to vCenter (they will go back to solid text – no

longer italicized and/or fuzzy). You can watch the reboots by looking at the server KVM

Consoles as well, if you like.


Task V3: Check VM Lifecycle Policy

Once the VM-FEX DVS is all set up and VM's are attached to it, the VM's will automatically

appear in UCS Manager. When VM's are detached or no longer exist, they are cleaned up from

UCS Manager. This policy controls how often the cleanup polling takes place.

1. Highlight the word All in the nav. pane and go to the Lifecycle Policy tab

2. Both policies should already be set to 1 Min (radio button). You will not have sufficient

privileges to change this policy.

Task V4: Install the UCS extension file plug-in on vCenter

Before you can tell your UCS Manager to connect to vCenter you need to retrieve an "extension" file from UCS Manager and install it on the vCenter server. This serves as authentication so that vCenter will allow the connection from the UCS Manager. (Note that vCenter could have any number of these extension files installed if you were using multiple DVS installations, in different Datacenters, in the same vCenter.)

1. In UCS Manager, go to the VM tab in the nav. pane

2. Highlight the word VMware in the nav. pane and go to the General tab.

3. Click Export vCenter Extension (you should be here, as shown):

4. In the Save Location pop-up, click the … and navigate to the folder where you want

to store the extension file (My Documents is fine). Click Select

5. Click OK to save the file (it will be called cisco_nexus_1000v_extension.xml), yes, even though this is an extension for UCS Manager rather than Nexus 1000v. If this conflicts with another file previously downloaded for the N1K lab, feel free to overwrite the old one.

6. In the vSphere client, choose Plug-ins > Manage Plug-ins:


7. At the white space near the bottom of the Plug-in Manager window, right-click to pop up

the New Plug-in button, and click on it.

8. In the Register Plug-in window, click on Browse… and navigate to and select the extension file that you saved.
9. After you double-click or Open the extension file, its contents will appear in the "View Xml" (read-only) area. Click Register Plug-in at the bottom.

10. Click Ignore on the certificate warning popup.

11. Your extension will be registered in vCenter and you will see it (Cisco-UCSM-xxxxxx)

in the Plug-in Manager under Available Plug-ins (there is nothing else you need to do).

12. Click "Close" at the bottom of the Plug-in Manager window to close it.

Task V5: Create the vCenter Connection and the DVS from

UCS Manager

Now that the extension key is successfully installed in vCenter we can connect our UCS

Manager to vCenter and create the VM-FEX DVS

1. On your remote desktop (which is the same as the vCenter server), figure out which IP address you have on the 10.0.8. network (call this the vCenter IP)
[ run ipconfig /all in a cmd window. Make sure you get the one that begins with 10.0.8.]
2. In UCS Manager, go to the VM tab in the nav. pane if not already there.
3. Highlight the word VMware on the left and go to the vCenters tab (there should not be anything listed here yet).
4. Click the green "+" on the right
5. The name for vCenter is arbitrary, but we cannot conflict with other groups using the same UCS. Use MyVCX (where X is your pod number) for the name, and the IP you discovered in step 1 for the address. Make sure to use the proper podnumber suffix to prevent our cleanup scripts from wiping out your configuration when cleaning up another pod.
6. Click Next
7. Under Folders, leave blank and click Next (we do not have a folder outside the DataCenter object)


8. Under Datacenters, enter the name of your Datacenter, MyDCX (this must match the datacenter name in vCenter), and click Next.
9. Under Folders (this is now folders inside the datacenter, and is required), click the green "+ Add" at the bottom and enter RedSox (this is a new name that will be pushed to vCenter). Click Next.
10. Under DVSs, click the green "+ Add" at the bottom and enter a name for the DVS (this is arbitrary). Use MyVMFex. Click the Enable radio button and click OK
11. Click Finish for the DVSs.
12. Click Finish for the Folders (you can see this sort of rolls backwards)
13. Click Finish for the DataCenters and OK to confirm creation. This pushes the folder and DVS information to vCenter.

At this point look at the vSphere client (Networking view). You will see your DVS

named MyVMFex created inside a folder named RedSox. The tasks at the bottom of

vSphere client will indicate this configuration is done by your UCS Manager. When you

are done, vSphere client should look like:


Task V6: Create Port Profiles for VM's in UCS Manager

Port profiles are created in UCS Manager; they define VLAN(s) and other properties (such as QoS) for virtual machine networking. You would create separate port profiles, for example, for any set of virtual machines that needs to connect to a different VLAN or set of VLANs. Port profiles are pushed to vCenter by attaching "Profile Clients" to them (profile clients simply describe which vCenters / DataCenters / DVSs should receive the corresponding port profile).

1. In UCS Manager on the VM tab, highlight the word Port Profiles on the left and click the green "+" way over on the right.
2. Fill in the form: use the name vmdataX (where X is your pod number) and leave all other policies at their defaults (we will investigate high performance later). Choose only the ServerData VLAN (and make it native).
Unfortunately there is no org-structure behind these port profiles, so we will have to use the unique names (with the X) and you may see lots of similar port profiles from other groups. Make sure you use the proper podnumber suffix to prevent our cleanup scripts from wiping out your configuration when cleaning out other pods.
3. Click OK and OK again to acknowledge


4. On the nav. pane on the left, open up Port Profiles and highlight Port Profile vmdataX
5. Click the Profile Clients tab
6. Click the green "+" on the far right to bring up the profile clients form (it defines which vCenters / folders / DataCenters to push the profile to)
7. Create a client name (this is arbitrary): vmdataX, where X is your pod number. Make sure you change the datacenter to your datacenter name (do not pick All, do not pick your friend's datacenter name…). You can leave everything else as is:
8. Click OK
9. Back in vSphere client on the Networking view (where you should be already), you should see a port group named vmdataX get added to your DVS.
10. Repeat steps 1-9 (both port profile and profile client) twice with these values. Make sure the native VLAN radio button is checked for the single VLAN in each profile.
Port profile and profile client name: mgmtX vlan(native): Mgmt-Kickstart
Port profile and profile client name: vmotionX vlan(native): VMotion
11. You should now see the three port groups in vSphere client Networking view


Task V7: Add the Two ESX Hosts to the DVS

This procedure adds the ESX hosts as members of the DVS. You will be specifying the appropriate placeholder uplinks (the static vNICs mentioned earlier) to attach as DVS uplinks:

1. In vSphere client, go to the Networking view (if you are not already there)
2. Single click to highlight the MyVMFex DVS
3. Right click > Add Host…
4. Set the next window precisely as in the screenshot below (the IPs of your hosts will be different). For both hosts, we are just identifying the (placeholder) DVS uplinks. Recall that the only purpose of these is to make the VMware DVS concept happy.
You must check the checkboxes on the left for every host and NIC you are choosing, as shown:
5. Click Next
6. Leave the page alone (everything says "Do not migrate"). We will migrate the vmk interfaces (management and vMotion) to the DVS later. Click Next.
7. Leave the page alone. We will migrate VM networking to the DVS later. Click Next.
8. Click Finish.


9. Your hosts are added to the DVS. To confirm:
a. Staying on the Networking view, click the Hosts tab on the right. You will see your two hosts connected.
b. Switch to the Hosts and Clusters view.
c. Highlight one of your hosts (not a VM) on the left
d. Click the Configuration tab on the right
e. Click Networking (in the Hardware box).
f. Click vSphere Distributed Switch to show the DVS


Task V8: Migrate Virtual Machine Networking to the DVS

We will hook up virtual machines to the DVS. Remember the model: while all port configuration is done on UCS Manager, the "virtual wiring" of VM's to these ports is done from vCenter.

In the vSphere client:

1. Go to the Hosts and Clusters view
2. Highlight your RH1 virtual machine.
3. Right click Edit Settings…
4. In the Properties popup, highlight Network adapter 1
5. On the right side, change the "Network Label" pulldown to vmdataX(MyVMFex). You should be here:
6. Click OK
7. Run ping and/or putty(ssh) from your remote desktop to the IP for RH1 (remember you can get it from the Summary tab for the VM; not the one in blue, which is the ESX IP).


8. Examine how virtual machines appear in UCS Manager (they are visible, although there are no administrative actions). On the VM tab, open up the tree underneath VMware > Virtual Machines. Your ESX servers may be mixed in among servers from other groups of students, so look carefully at the servers (chassis/slot) to find yours. You can go all the way down to the vNIC level, and see that a vNIC has been associated with the VM and has inherited the MAC address of the VM.
9. Repeat steps 2-6 for the other two VM's (WinXP and Win2K08). You can test with ping or an RDP connection from the remote desktop. You should see these VM's "pop into" UCS Manager as well.


Task V9: Migrate vmk's (ESXi management and vMotion) to

the DVS

We will move the remaining functionality to the DVS. After this step, the "old vSwitch" is no

longer used at all, and in the real world you could delete it completely.

1. On the vSphere client, go to the Hosts and Clusters view, if not there already
2. Highlight one of your ESX servers (not a VM)
3. Click the Configuration tab
4. Click Networking (in the Hardware box)
5. At the top on the right, change the "View:" to vSphere Distributed Switch if it is not there already.
6. To the right of the label "Distributed Switch: MyVMFex" (just above the picture), click the blue link Manage Virtual Adapters.
7. In the popup, click the blue Add link at the top
8. Select the radio button Migrate existing virtual adapters and click Next
9. Select the two appropriate port groups for the virtual adapters, as shown:
10. Click Next. Click Finish to confirm and close the popup
11. Repeat steps 2-10 precisely for your other ESX server

Task V10: Examine vMotion and the DVS

You will note that across vMotion, a VM retains its virtual port identity (and therefore all of its

characteristics) on the DVS.

1. On vSphere client, migrate one of your virtual machines (RH1, for example) to the other ESX server:
a. go to Hosts and Clusters view if not already there
b. highlight the VM name
c. Right Click Migrate…
d. Select the Change host radio button and click Next
e. Choose the ESX server on which the VM is not currently running and click Next
f. Confirm (High Priority) and click Next and Finish and watch the VM migrate
g. Back in the UCS Manager VM tab, you can look underneath the Virtual Machines tree and see that the same VM has moved to the other server but still has the same vNIC identity with the same characteristics. Physically this traffic now comes from the virtual adapter card in the new server, but logically it is the same vNIC.


Task V11: Implement High Performance Mode

Recall that a VM in high performance mode on the VM-FEX DVS will bypass the hypervisor

completely using DirectPath IO.

The request for high performance mode is in the port profile.

It should be considered just a request. Reasons that a VM cannot implement the request are:
a. vmxnet3 must be supplied as the virtual adapter type. Some OS's may not have drivers for this virtual adapter.
b. The VM must have a full memory reservation (it reserves in advance from ESX the full capacity of the VM memory). This is a setting you can change on the fly (without rebooting the VM).
c. Some OS's (WinXP, for example) just cannot implement the feature (to be technical, the feature requires MSI or MSI-X, "message signaled interrupts", which exist in the modern Windows varieties (Vista, Win7, 2008) but not in XP).

1. On the vSphere client, go to the Hosts and Clusters view, if not there already

2. Highlight your RH1 VM
3. Right click Edit Settings
4. Highlight Network Adapter 1 on the left
5. Confirm (near upper right) that the adapter type is VMXNET 3
6. Confirm that DirectPath I/O is Inactive (toward middle right)
7. In the same Settings window, click on the Resources tab
8. Highlight the Memory label on the left
9. Pull the Reservation slider as far right as it will go (1024MB). You should be here:
10. Click OK


11. In UCS Manager, go to the VM tab and highlight the name of your data port profile as shown (please use yours and not anyone else's). Make sure you are on the General tab on the right.
12. Click the High Performance radio button. You should be here:
13. Click Save Changes and acknowledge
14. Back on vSphere client, edit the settings for RH1 again as in steps 1-6
15. Woo-hoo! DirectPath I/O should be Active (you won't notice any other difference, but networking for this adapter is optimized and bypassing ESX completely). Click Cancel.
16. Perform a migration (vMotion) of RH1
17. If you look at the settings (as in steps 1-6) "fast enough", you may be able to see the adapter transition back to standard mode (DirectPath I/O inactive) in order to perform the migration and then back to optimized (active) once the migration is finished. It could take 10 seconds or so after the migration to see it active again. Try the migration again if you miss it the first time.
18. Repeat steps 1-15 for the Win2K08 VM. Make sure you slide the memory reservation as far right as you can in step 9 (you should be successful in seeing DirectPath I/O become active).
19. Repeat steps 1-15 for the WinXP VM (DirectPath I/O will never become active since there is a prerequisite not satisfied by the OS).