
NetApp Storage Lab Blog Posts:

Introduction:

Before undertaking the storage labs I had to set up a basic vSIM storage appliance that I could use for the various tasks in modules 2 through 6.

We were provided with a “gold copy” of a NetApp 7-Mode Data ONTAP appliance virtual machine (VM) that we could copy as many times as necessary and then add to our VM inventory. Due to the various customisations already made by NetApp, direct cloning is not possible and a few modifications have to be carried out before use. Specifically, a virtual disk must be removed and a new one added, and a couple of extra lines need to be added to the configuration parameters.

With the simulator ready for use, I also needed to perform a clean configuration and initialise all disks. I then replaced the 2 × 14 × 1 GB virtual disks with 2 × 14 × 9 GB disks and performed a second wipe. In the end we are left with two shelves of disks with a total of 250 GB of space.

At this stage I also set up the rest of my VMs and the local domain (callum.local) that I would be using for the remainder of the storage labs and the subsequent VMware labs. This included:

Windows 8.1 box – I installed OnCommand System Manager on this, along with a copy of PuTTY for managing my simulator. I also assigned a static IP and joined this VM to my domain.

2 × Windows Server 2012 R2 boxes – one as the primary domain controller, DC01, and one to be used later as a vCenter Server (also joined to callum.local).

Linux box – this was based on Ubuntu Server and would be used primarily for NFS shares.

At this time I also mounted a network drive on all the Windows-based VMs. This required using my TALOS credentials to access the share rather than the default callum.local domain credentials.

Finally, I made sure that my simulator was on the same distributed port group as all my other VMs, that interface e0a corresponded to the correct NIC, and that it had the correct IP (172.16.61.90). I also ran the netdiag command to confirm everything was as it should be.
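From the 7-Mode console, that check boils down to two commands; a minimal sketch (the netmask shown is an assumption, only the IP is from my setup):

```
# assign the address to the first NIC (netmask assumed for illustration)
ifconfig e0a 172.16.61.90 netmask 255.255.255.0
# run the built-in network diagnostics
netdiag
```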

Module 2:

I added my vSIM to the system manager by entering its IP when adding a node. I then had to enable TLS before I could log in to manage my vSIM, as it is disabled by default.
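For reference, enabling TLS on a 7-Mode system is a single option change; a sketch of the kind of command involved, from memory rather than a transcript of my session:

```
options tls.enable on
```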

I then played around with putty, entering some basic commands to familiarise myself with the

commands used by NetApp. This included using the help file.

For the final part of this module I then configured my vSIM with the correct DNS and domain settings.
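A hedged sketch of what that configuration looks like at the 7-Mode CLI (the exact name-server entries in /etc/resolv.conf are omitted here):

```
# set the DNS domain and enable lookups
options dns.domainname callum.local
options dns.enable on
# inspect the name-server entries
rdfile /etc/resolv.conf
```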

Module 3:

Task 1-3

The next task involved exploring disks and aggregates. We first ran the storage configuration wizard, but I did not use it, as I was required to configure storage manually to avoid auto-assigning all disks. Available space was 15.82 GB out of a total of 15.82 GB.

I then created an aggregate manually via the command line interface (CLI). The command suggested by the lab documentation, "create aggr aggr2 -B 64 4", was not a recognised NetApp command: -B does not appear in any documentation as being applicable to creating an aggregate, and specifying 64-bit is also not allowed at this stage. I found I had to use “create aggr aggr2 -T FCal 4” to create a four-disk FCAL aggregate like the one we created in the system manager.
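For context, the documented full form of this command in 7-Mode is `aggr create`; a sketch of the equivalent, with the disk type and count taken from the task:

```
# create a four-disk aggregate from FCAL disks
aggr create aggr2 -T FCAL 4
# confirm the new aggregate
aggr status aggr2
```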

Task 4:

Data disk ID = v5.17 – not failed.

Here I failed a disk on purpose, which then copied its contents onto a spare.

vol status gives volume information only, while sysconfig gives full system information.
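A sketch of the commands this task revolves around (the disk ID is the one from my notes; the -a flag simply makes sysconfig verbose):

```
# fail the data disk on purpose; its contents are rebuilt onto a spare
disk fail v5.17
# volume-level information only
vol status
# full (verbose) system information
sysconfig -a
```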

Task 5:

In this task I simply added a disk to an aggregate.

Task 6:

Here I used OnCommand System Manager to create a flexible volume in the aggr1 aggregate.

Task 7:

In this task I increased the size of NASvol then checked the status and size of the volume via the CLI. I

also then resized the volume via the CLI.
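The CLI side of this can be sketched as follows (the size delta here is illustrative, not the value I actually used):

```
# grow NASvol by 100 MB
vol size NASvol +100m
# check the volume's status and space usage
vol status NASvol
df -h NASvol
```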

Task 8:

Here I created a qtree, NASqt1, via the system manager and then did the same via the CLI (NASqt2).

Task 9:

After creating the two qtrees in the previous task I then deleted them, one via the system manager and the other via the CLI.

Task 10:

This task simply required deleting the volume, NASvol, via the system manager. In order to delete a volume it has to be taken offline first.
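The CLI equivalent is a two-step sketch (shown here for completeness; I did this via the system manager):

```
# a volume must be offline before it can be destroyed
vol offline NASvol
vol destroy NASvol
```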

Module 4:

Task 1 – licensing NFS:

Task 2:

Used system manager to export a volume.

Task 3:

In this task I created a volume (NFSvol) and exported it. All via the CLI.
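A sketch of the sort of commands involved (the aggregate name and size are assumptions; the volume name is from the task):

```
# create the volume and export it read/write
vol create NFSvol aggr1 100m
exportfs -p rw /vol/NFSvol
# list current exports to verify
exportfs
```

On the Ubuntu box the export would then be mounted with something like `mount -t nfs 172.16.61.90:/vol/NFSvol /mnt/nfsvol`.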

Task 4:

Task 5 and 6:

These two tasks simply required me to license and configure the CIFS service.

Task 7 and 8:

In these next tasks I created a Windows domain user (which I had already done when I initially created my local domain). I then created an NTFS qtree (cifs_tree1) and shared it via the CLI.
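A hedged sketch of those CLI steps (the containing volume, NASvol, is an assumption):

```
# create the qtree and give it NTFS security style
qtree create /vol/NASvol/cifs_tree1
qtree security /vol/NASvol/cifs_tree1 ntfs
# share the qtree over CIFS
cifs shares -add cifs_tree1 /vol/NASvol/cifs_tree1
```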

Task 9:

Here I used the Microsoft Computer Management tool to create a share (cifs_tree3).

Task 10:

Here I mapped both cifs_tree1 and 3 to drive letters X and Y.

Task 11:

In this task I made sure that my domain user had full control permissions for the previously created

cifs shares.

Task 12:

Task 13:

In this task I simply configured the Server Message Block 2.0 protocol (SMB 2.0) on both the storage system and the Windows client.
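On the storage side this comes down to an option toggle; a sketch (the client-side change on Windows is separate and not shown):

```
# enable the SMB 2.0 protocol on the filer
options cifs.smb2.enable on
```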

Task 14:

Here I terminated the running CIFS session and stopped and restarted the CIFS service via the system

manager.

Module 5:

Task 1:

In the first task I used the system manager to create a snapshot copy, SnapA.

Task 2:

This task required us to restore a Windows file from a snapshot copy, but before I could do this various options had to be enabled on the vSIM. The following picture illustrates this.
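The options in question are most likely of this kind; a hedged sketch, since the picture above shows the exact ones:

```
# expose the snapshot directory to CIFS clients
options cifs.show_snapshot on
# make sure the snapshot directory is not hidden on the volume
vol options NASvol nosnapdir off
```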

Restoring the file itself:

*Note – I did run into an issue regarding permissions in this task. For some reason the owner of the snapshot copy folder had changed, and I was unable to change ownership and thus give any of my users the permissions needed for full control. In the end I rebuilt my vSIM and had no issues at all the second time around.

Task 3:

Here I deleted a Linux file and then restored it using a snapshot copy. I did have to dig a little to find the snapshot folder here; a simple ls -a command was all I used to explore the folder structure via my Linux terminal.
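The restore itself can be sketched from the Linux side like this (the mount point, snapshot name and file name are illustrative):

```
# the snapshot directory is hidden, so -a is needed to see it
ls -a /mnt/nfsvol
# snapshots are exposed read-only under .snapshot
cd /mnt/nfsvol/.snapshot/SnapA
# copy the deleted file back into the live filesystem
cp deleted_file /mnt/nfsvol/
```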

Task 4:

Next I explored the various configuration options available for snapshot copies via the system manager. Total is 100 KB and used is 100 KB.

Task 5:

The final task in this module required me to restore a file via the Windows Previous Versions tool.

Please view this video for more detail and a full tutorial on this module:

https://www.youtube.com/watch?v=34F4hOlVlRE

Module 6:

Task 1:

Similar to the start of other modules, the first task only required licensing and starting the iSCSI service.

Task 2:

In this task I configured my domain controller (DC01) to be an iSCSI initiator.

Task 3:

Here I created a LUN and attached it to an iGroup via the system manager.

Successful LUN creation:

Attached to an iGroup:
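The CLI equivalent of this task can be sketched as follows (the LUN path, size, igroup name and initiator IQN are all illustrative assumptions):

```
# create a Windows-type LUN
lun create -s 2g -t windows /vol/NASvol/lun0
# create an iSCSI igroup containing DC01's initiator name
igroup create -i -t windows ig_dc01 iqn.1991-05.com.microsoft:dc01.callum.local
# map the LUN to the igroup
lun map /vol/NASvol/lun0 ig_dc01
```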

Task 4:

After previously creating LUNs, I then mapped said LUNs as network drives on DC01. To make sure everything was working correctly I then wrote a text file to each LUN.

Task 5:

Module 7:

Task 1:

There still seems to be an issue with the "-B" command: the vSIM does not recognise it when attempting to create an aggregate, and there appears to be no option for 32-bit aggregates via OnCommand System Manager, so I carried on with a 64-bit aggregate. Space available is 15 GB, the total is 17 GB, and there is no snapshot reserve.

Task 2:

Here I had an issue writing to FILEvol in Linux; changing the permissions in the exports to allow all hosts to read/write fixed it. I also could not set overwrite protection because (a) FULLfile is located at /vol/FILEvol/FULLfile, not /vol/FULLfile, and (b) there is not enough space on the drive, as we filled it with FULLfile. The same applies to SPARSEfile.

Task 3:

In this task I configured deduplication on a volume, specifically, NASvol.
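In 7-Mode, deduplication is driven by the sis commands; a sketch:

```
# enable deduplication on the volume
sis on /vol/NASvol
# scan the existing data as well as new writes
sis start -s /vol/NASvol
# check progress, then space savings
sis status /vol/NASvol
df -s NASvol
```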

Task 4:

In this task I set quotas and exceeded the set limits, both hard and soft.

I do not see anything on the CLI if the hard quota is exceeded, but the client Windows box tells me there is not enough space, as it should, because I was attempting to copy a 24 MB file, twice the 12 MB hard limit; 14.3 MB more space is needed.

The CLI does tell me when the soft limit is exceeded, however, whereas the client allows the copy to take place.
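A sketch of the kind of /etc/quotas entry behind this behaviour (only the 12 MB hard limit is from my notes; the qtree target, threshold and soft-limit values are illustrative):

```
# target              type   disk  files  thold  sdisk  sfile
/vol/NASvol/NASqt1    tree   12m   -      10m    8m     -
```

Quotas are then activated on the volume with `quota on NASvol`.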

Setting an initial quota:

Quota exceeded:

Quota exceeded (soft limit):

Quota exceeded (both soft limit and threshold):

Task 5:

In the penultimate task of this module I created a quota report via the CLI.

Task 6:

Finally, I used the system manager to edit the user-defined quotas for NASvol. After implementing my changes by using the resize command, I then read the /etc/quotas file to verify the new current values.
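The resize-and-verify step can be sketched as:

```
# apply edited limits without a full quota off/on cycle
quota resize NASvol
# read back the quotas file to confirm the current values
rdfile /etc/quotas
```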
