Building a High-End Windows 2008 Database Server


  • 8/10/2019 Building a High-End Windows 2008 Database Server

    1/21

    Building a High-end

Windows 2008 Database Server

    As published at

    http://EvansBlog.TheBarrs.info/2009/08/building-high-end-windows-2008-database.html

    August 27, 2009

    By Evan A. Barr, CTO

    Project InVision International

    7428 Redwood Blvd; Suite 203

    Novato, CA 94945

    415-898-7300

    http://www.ProjectInVision.com

    Copyright Evan A. Barr 2009. All rights reserved.


    As I sit down to write this, it occurs to me that by the time anyone reads this, it will be partly obsolete.

    It has already been six months since I built the two identical servers described in this article (built in

    March 2009). However, this is still a worthwhile endeavor, since many of the techniques and

    technologies described will still be relevant for years to come and I expect these servers to remain in

    service for at least five years.

This article is not nearly as detailed as it could be. There is so much that goes into the design of a high-

end server that, to be concise, I have generally listed only my conclusions at each step instead of

providing a full explanation. And still the article is too long to publish in a paper medium.

    I will discuss the parts that make up these servers, followed by the actual parts list and prices at the time

    of purchase. I will then get into the more important specifics of both the hardware and software

    assembly and then end with some pictures.

    Concepts

    Why Build Your Own

    I build my servers myself instead of buying from vendors like Dell because it is the only way to have

    complete control regarding the quality and speed of the components. Most vendors have to make

compromises in these two areas because they can't count on getting the latest components in sufficient

    quantities.

    Although a system built by me will cost about 1/4 to 1/3 of a comparable Dell, it is not necessarily

    cheaper when my time is factored in. While it is true I would have to do a lot of research if I were

    buying machines like these from a vendor, building my own requires considerably more time, as there

    are always new technologies to learn and there are a plethora of competing products for every piece of

    the system.

    Clustering

    I started the design of these two identical servers with the intent to use live-live failover clustering. My

    definition of live-live is that my databases are distributed across the servers with each acting as the

failover for the other. When I started looking into what is involved with maintaining a cluster, it didn't

    take long for me to decide that it is not worth it.

    The biggest issue with a cluster is in installing patches and other upgrades. It is not only more involved

than with a single server, but you really need to test the upgrade on a parallel cluster. So, you are buying four machines instead of two.

    Another issue I had is with the hardware for the external hard drives. A cluster requires that all servers

    in the cluster have a direct connection to the same set of data drives. In the past, this meant two

    compromises. The first is that you end up with a single point of failure at the very point where you are

    most likely to actually have a failure. The second is that even fiber channel connections are slower than


using a direct connection. I say "in the past" because Serial Attached SCSI (SAS) has the potential to

    solve both of these concerns.

    The beauty of SAS is that you can combine multiple lanes to achieve the desired transport speed

    regardless of whether the drives are internal or external. Theoretically, you should also be able to

spread the RAID arrays across multiple enclosures and multiple controllers. By the time you are reading this, it is possible that the right hardware has become available, but what I found was insufficient and

    very expensive.

    Given the above, you need to demonstrate extreme uptime requirements to justify the initial and

    ongoing costs of clustering. It is also worth noting that the database server tends to be the most stable

    part of an n-tiered system. The server that I replaced with these two new ones was on continuously for

    four years before it had its first unscheduled shutdown.

    I will still be distributing my databases across both servers. To provide for minimal downtime, each

    machine will get a local copy of the backups from the other machine in real time (using DFS) and scripts

    will be used to automate the process of restoring all backups to the alternate environment. We also do

    daily backups to another database server that resides 1,750 miles away in case we lose the entire

    datacenter (or the entire west coast).
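The restore scripts themselves are not shown in this article. As a rough sketch of the idea (directory layout and file names are hypothetical, assuming one .bak file per database named after it), a script can walk the replicated backup directory and emit one T-SQL RESTORE statement per file:

```python
# Sketch only: generate T-SQL RESTORE statements for every backup file
# replicated from the partner server. Names and layout are hypothetical;
# the article does not show its actual scripts.
from pathlib import Path

def build_restore_script(backup_dir: str) -> str:
    statements = []
    for bak in sorted(Path(backup_dir).glob("*.bak")):
        db = bak.stem  # assume one .bak per database, named after it
        statements.append(
            f"RESTORE DATABASE [{db}] FROM DISK = N'{bak}' WITH REPLACE;"
        )
    return "\n".join(statements)
```

The generated script can then be run against the alternate server to bring every replicated database online.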

    The Components

    Hard Drives

    With a database server, the hard drives are the key to performance. The memory and processor

    certainly play a part, but hard drives are incredibly slow when compared to everything else. There are a

    number of things that can be done to improve the speed. Some are obvious to anyone who has built a

    system before and others are very advanced.

    Spin Rate

It is a given to use 15,000 RPM drives. A slower drive can't keep up with the transfer rate of its own

    interface and access times are greatly impacted.

    Type

    The first decision these days is whether to select SAS or parallel SCSI. I went with SAS because of the

    flexibility and because I believe that parallel SCSI has reached the end of its life cycle. At their fastest

    speeds, SAS allows for a longer cable length and can connect to significantly more drives per controller.

    In the future, when I decommission my current controllers, I can reuse them with a combination of SAS

and SATA drives in a lesser system. Because SAS connectors are much smaller than SCSI, it is possible to

    put two ports on a single drive to allow a drive to be connected to two different controllers for

    controller failover at the drive level. SAS drives are also typically smaller, allowing you to fit more of

    them in a system.


    RAID

    Speed and throughput are not the same thing, although they are related. Throughput is about how

    many transactions you can handle at once. Increasing throughput by using more drives or spreading

    transactions across multiple servers will actually decrease the speed if the number of simultaneous

    transactions is relatively small. But when you have many simultaneous transactions, you will get more

speed by increasing the throughput. Smart controllers and sufficient caching can help reduce speed loss while also improving throughput.

    Given the above, the next decision is how many drives I will need. This also requires that I decide on the

    number of drive volumes and RAID level for each. Since I am building a single-purpose server,

    specifically a SQL Server, I can really optimize here. I do have budget constraints to consider and I need

    to make sure I can get a case to handle my plan. I should note that drive capacity is only a minor

    consideration because even the smallest drives have ample capacity given the number of drives I want

    to use to boost my RAID throughput.

    There is much to consider with RAID, especially when you add in variants such as RAID 50 and 60. Most

    RAID levels have their pluses and minuses, but I find RAID 10 tends to give me the best balance between

performance and the number of drives you can lose before data loss. The fact that you can't hold as

    much data as RAID 5 is not an issue for me since I am using many drives to increase my throughput. Also,

    losing a drive under RAID 5 absolutely kills performance until the drive is replaced and rebuilt.

    There is a little confusion with RAID 10 but modern controllers have eliminated the issue. Two schemes

    have been called RAID 10: You can have a stripe that is mirrored or you can have mirrors that are striped.

    The latter is significantly better because you can lose one drive on each mirror without any data loss,

whereas the former cannot withstand the loss of more than one drive (you can't afford a second lost

    drive because one of the stripes will already be offline after the first loss). However, both schemes

    physically store the bits of data identically, so modern controllers use that fact to give the same level of

    protection to both.
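The difference between the two schemes is easy to check: with mirrors that are striped, the array survives as long as no single mirror pair has lost both of its drives. A small sketch (drive numbering is hypothetical):

```python
def raid10_survives(failed_drives, mirror_pairs):
    """True if no mirror pair has lost both of its members.

    In a stripe of mirrors, the array keeps running as long as at
    least one drive in every mirror pair is still alive.
    """
    failed = set(failed_drives)
    return all(not set(pair).issubset(failed) for pair in mirror_pairs)

# A six-drive RAID 10: three mirror pairs, striped together.
pairs = [(0, 1), (2, 3), (4, 5)]
```

Losing one drive from each pair (three drives out of six) is survivable; losing both drives of any single pair is not.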

    I need the following volumes:

1) System (OS)
2) Data
3) Logs
4) TempDB
5) Backups

    I used a two-drive RAID 1 for the OS. I also used a single drive for the backups. This is because the

    backups are already redundant to the original data and are replicated to both servers, and the backups

    get backed up again each day to offline storage as well as a third server in another state.

TempDB also doesn't require redundancy as it is rebuilt every time SQL Server starts and it only contains

    interim results and not permanent storage. You can speed up all of your databases by storing TempDB

    on its own volume. You can store TempDB in memory instead if you know what its maximum size will


    be. That is not the case for me, so I used a two-drive RAID 0 volume. One final consideration with

    TempDB is that you get optimized usage with SQL Server if you break TempDB into as many files as you

    have CPU cores. The ideal is to put each file on its own hard drive and skip RAID altogether. I will not

    have enough drives initially to do this.
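Adding the extra TempDB files can be scripted. A hedged sketch (the path, file names, and sizes are hypothetical) that generates the T-SQL for the secondary files, one per core beyond the primary file SQL Server creates by default:

```python
def tempdb_file_script(cores: int, path: str = r"T:\SQL2005 TempDB",
                       size_mb: int = 1024) -> str:
    """Generate ALTER DATABASE statements that add one TempDB data
    file per CPU core beyond the primary file that already exists.
    Path and sizes here are placeholders, not the article's values."""
    statements = []
    for i in range(2, cores + 1):
        statements.append(
            "ALTER DATABASE tempdb ADD FILE (NAME = tempdev%d, "
            "FILENAME = N'%s\\tempdev%d.ndf', SIZE = %dMB);"
            % (i, path, i, size_mb)
        )
    return "\n".join(statements)
```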

Data and Logs are accessed differently and should, therefore, be on separate volumes. Like TempDB, you can benefit from breaking databases into multiple files and putting each on its own volume. I am

    supporting far too many databases for this to be possible. Instead, I created a six-drive RAID 10 for data

    and a four-drive RAID 10 for logs.

    Here is the summary using a total of 15 drives:

Purpose   RAID Type   Drives
OS        1           2
Data      10          6
Logs      10          4
TempDB    0           2
Backups   none        1
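As a sanity check, the usable capacity of each volume follows directly from its RAID level and drive count. A quick sketch using the 147 GB drives from the parts list below:

```python
def usable_gb(raid: str, drives: int, drive_gb: int = 147) -> int:
    """Usable capacity of a volume for the simple RAID levels used here."""
    if raid == "1":            # mirror: capacity of one drive
        return drive_gb
    if raid == "10":           # stripe of mirrors: half the drives
        return drives // 2 * drive_gb
    # RAID 0 or a single drive: all capacity is usable
    return drives * drive_gb

volumes = {"OS": ("1", 2), "Data": ("10", 6), "Logs": ("10", 4),
           "TempDB": ("0", 2), "Backups": ("none", 1)}
```

Even the smallest arrays here hold well over 100 GB, which is why raw capacity is only a minor consideration.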

    Putting 15 drives into a case creates a major constraint on what kind of case I can use, especially since I

    need the drives to be hot swappable and I plan to add more drives later.

    I will be describing more things to do to make the drives faster when I describe the hardware and OS

    settings below. If you read nothing else, make sure you read that section.

    Case

    I require a rackmount case that includes rails and can accommodate the large number of drives I need.

It is possible to get a case that will hold 24 x 2.5" drives in only 2U, and I initially picked such a case.

Unfortunately, the only acceptable 2.5" drives that fit my budget were unavailable from any source I

could find. Everything else was at least double the price of the best 3.5" drives that had twice the

capacity. I ended up with a 4U Super Micro case that can hold 24 x 3.5" drives. There are very few cases

    like this on the market. (See pictures at the end of this document.)

    This specific case puts the power and reset buttons along with the six indicator lights on the handle,

    since there is no room on the front for anything but the drive cages. There is also a slot on the back for

    a half-height CD/DVD drive. It is possible to use this case as a do-it-yourself external drive chassis by

    adding an optional chassis power card in place of the motherboard. I actually considered that when

    looking into clustering.

    Internally, there are two redundant hot swappable power supplies, five hot swappable fans, and an

    adjustable air shroud for concentrating air flow to the CPUs. At the datacenter, each server gets

    plugged into two separate power strips that are on a very redundant power grid (two power providers +

    batteries + two huge diesel generators with multiple fuel suppliers).


    A big advantage of the larger case is that I can use full-size expansion cards and it is easy to route cables

    in a way that maximizes air flow within the case. On the downside, this case is extremely heavy,

    requiring big rack rails that are too long to fit in common-sized enclosed racks. You will actually need to

    have at least 31" of clearance from the front post to the back door. My rack has only 29", which meant

    that I had to use a heavy-duty sliding shelf instead of the included rails (very aggravating).

    CPU

    The CPU needs to be selected before I can narrow down the motherboard possibilities. Not only do I

    need to consider different CPU families, but also speed within the families. I usually work my way up

    the speed ladder until I see a large jump in the price. Instead of paying too much for speed, it makes

    more sense to go with more CPU cores and more CPUs. SQL Server is very good at taking advantage of

    the extra processors.

It wasn't until Intel came out with the Core 2 Duo that I felt they were making fantastic CPUs (I've used

    everything back to the 8086). The Xeon processors based on this and the new i7 core are a bargain

compared to everything else that is out there. I ended up selecting the 80W version of the E5430, which has four cores, a 12 MB L2 cache, and runs at 2.66 GHz. If I had selected the 3 GHz model (E5450), it

    would have doubled the price, though none of my customers would ever notice that marginal speed

    difference. When I see such an irrational price jump, I know I have gone far enough up the

    price/performance ladder.

    The biggest downside to the E5400 series Xeons is that the front side bus is 1333 MHz instead of 1600.

This means that I won't benefit from 800 MHz memory since the memory bus runs at 667 MHz.

    Motherboard

The Choices

I have used many different vendors for motherboards, but during the last 10 years, most of my servers

    have been built on Super Micro and most of my desktops have been built on ASUS.

    For this server, I needed something that supports two of the CPUs I selected and although I would be

    adding separate SAS controllers, I still wanted SAS support on the motherboard. I also wanted an

onboard graphics controller and as many x16 and x8 PCIe expansion slots as possible to handle my SAS

    controllers and future needs. Finally, I wanted lots of memory slots and a high maximum memory size

    since I know I will be adding more memory in two years. Other considerations are Gb Ethernet ports,

    bus speeds, memory pipelining, and the latest support chips for the E5400 Series CPU.

I was surprised to find that Super Micro didn't have a board to fit my needs. There were compromises I

    could have made to still produce a successful server, but ASUS had what I wanted.

    The Decision

The ASUS DSEB-D16 supports two Xeon processors, a maximum 1600 MHz front side bus, two second-

generation x16 PCIe slots, and one x8 slot. There are also six SATA ports and eight SAS ports, which is

    plenty if I were using this for a lower-end server. I could have saved more than a couple thousand


    dollars on the final price tag by skipping the SAS controllers, using only four drives for data, moving OS,

    TempDB and Backup to SATA and using the onboard SATA and SAS ports. However, the drives are by far

the biggest bottleneck in a database server, so I didn't consider this option. It's nice to know that these

    ports are there if I have a future need.

Very important on this motherboard are the 16 memory slots on four independent channels for a maximum of 128 GB of RAM. It even supports memory redundancy (mirroring) and hot spares.

Another useful feature is a set of four gigabit Ethernet ports powered by two Intel controllers that can

    be teamed together in many configurations to support data load balancing and failover.

    Memory

    Since my motherboard uses four memory channels, I get the best performance by buying memory in

    sets of four DIMMs. I also need to make sure I provide sufficient memory for the number of CPU cores I

    have in order to maximize their speed. I like to start with 2 GB per core. Also, it is a requirement that I

    use fully buffered DIMMs (FB-DIMM) with error correction (ECC). Another consideration is that the

    individual DIMMs must be sufficiently capacious so that I have the maximum number of slots left for

    future expansion. SQL Server makes excellent use of extra RAM for caching both data and execution

    plans.

    With eight cores, I wanted to start with 16 GB, so I ended up selecting four 4 GB DIMMs. They are 667

    MHz DDR2 because of the CPU and motherboard I selected. When selecting which brand/model to

    choose, I look at quality, latency timings, and price.

    SAS Controller

    The best controllers will minimize the participation of the CPU for data access while maximizing the

throughput of the attached drives. These controllers all have more RAID variations than I need, since I am using what are now considered the basics. The PCIe controllers mostly come with either an x4 or x8

    card bus. If selecting a card with more than eight data lanes (i.e. more than two mini SAS or SFF-8087

    connectors), then x8 is a requirement to prevent a bottleneck.
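The arithmetic behind that rule of thumb can be sketched as follows. The per-lane figures are rough assumptions (about 300 MB/s usable for a 3 Gb/s SAS lane and about 500 MB/s for a PCIe 2.0 lane), and real drives rarely saturate their lanes:

```python
# Rough bandwidth sanity check; all figures are approximate assumptions,
# not values from the article.
SAS_LANE_MBPS = 300     # ~3 Gb/s SAS lane after 8b/10b encoding
PCIE2_LANE_MBPS = 500   # approximate usable PCIe 2.0 lane bandwidth

def is_bottleneck(sas_lanes: int, pcie_width: int,
                  pcie_lane_mbps: int = PCIE2_LANE_MBPS) -> bool:
    """True if the host-bus bandwidth is below the aggregate
    theoretical bandwidth of the attached SAS lanes."""
    return pcie_width * pcie_lane_mbps < sas_lanes * SAS_LANE_MBPS
```

By this rough measure, an x4 card feeding 16 SAS lanes is bus-limited, while an x8 card comfortably covers two connectors' worth of lanes.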

    I initially need to support 15 drives but I could have as many as 24 in the future. I also prefer to spread

    the drives across a minimum of two controllers, and the controllers must be identical.

    I ended up with two x8 cards with four mini SAS connectors on each for direct support of 32 drives. That

    may sound wasteful at first, but each card can support my initial needs by itself, which gives me a

    fallback if one of the cards dies. It is important to note that there is no standard for how to actually

    implement each RAID level, which means that a card from a different manufacturer, or even a different

model from the same manufacturer, probably won't be able to read data written by a different card. If

you don't have a spare card lying around or at least spare capacity on existing cards, you can end up

    with a serious amount of downtime.


    A valid alternative would be to get three cards with two connectors each to support a maximum 24

    drives. However, this actually costs more and I would need a fourth card as a spare once I maxed out

    my drive capacity.

    I try to avoid having spare controllers around, as they are very expensive and become obsolete quickly. I

    have had to discard too many unopened boxes.

    Power Supply, Video Card, DVD, Floppy Drive

    I am not installing a floppy drive. For the few instances where I need one, I use a USB floppy drive.

    Many functions that once required a floppy can now be done using a thumb drive.

    For one of the servers, I pulled the DVD reader out of an old laptop and left the other one bare. I

temporarily connected a spare DVD reader during the OS installation. The balance of my installs was

done over the network. I don't spend too much time on selecting a DVD reader for servers since they

    dont get used much.

With workstations, I put a lot of effort into the selection of the video card. But with servers, I prefer that the video be integrated with the motherboard. The reason is that I rarely access the server directly

    after I finish the initial installations, and I want to leave the slot open for future expansion. Nearly all

    management is done remotely.

Power supplies are a critical component and there is much I could say on this topic. But I won't do that

    here because the case I selected came with two 900W, redundant, hot swappable power supplies. The

    power supplies slide out the back if you need to replace them while the server is running.

    Price

    Before I outline the major configuration steps, I am sure you are ready to see the actual parts list and

prices. Keep in mind that Dell can't touch the performance of this system at twice the price, and that

prices for computer components tend to go down over time. Don't be surprised if some of the parts are

no longer available by the time you read this. I did not include prices for any software since different

licensing schemes result in wildly different pricing. Don't stop reading here, because some of the

upcoming configuration steps are critical for maximizing a server's performance.

Part            Description                                           Qty     Price (ea)  Total
Case            SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount
                Server Case 900W (1 + 1) Redundant                    1       999.99      999.99
Motherboard     ASUS DSEB-D16/SAS Dual LGA 771 Intel 5400 SSI EEB
                3.61 Server Motherboard                               1       529.99      529.99
CPU             Xeon E5430 Harpertown 2.66 GHz 12 MB L2 Cache LGA
                771 80W Quad-Core (BX80574E5430P)                     2       494.99      989.98
Memory          Kingston 8 GB (2 x 4 GB) 240-Pin DDR2 FB-DIMM ECC
                Fully Buffered DDR2 667 (PC2 5300) Dual Channel Kit
                Server Memory Model KVR667D2D4F5K2/8G                 2 kits  212.99      425.98
Hard Drive      Fujitsu MBA3147RC 147 GB 15000 RPM 16 MB Cache
                SAS - OEM                                             15      179.99      2699.85
SAS Controller  Adaptec 2252700-R PCIe x8 SATA/SAS 31605 KIT          2       819.99      1639.98

Total        $7285.77
Tax            528.22
Shipping        68.81
Grand Total  $7882.80

    Hardware Setup

    This will not be a tutorial on how to assemble a computer. I would expect that someone building a

    server this expensive already has some build experience. Instead, I will just list some important steps

    specific to this system. Most readers may want to skip this section.

    Motherboard

The heavy heat sinks that attach to the Xeon processors must be physically

connected to the case and not just the motherboard. This is quite different from your typical

workstation build and is why you can't put a Xeon motherboard into just any case. The case will have

    standardized screw holes and the processors will have mounting clips that go on top of the screw

    supports but under the motherboard.

    Because I will not be using the SAS controller on the motherboard, I put a short on pins 2-3 of jumper

    SAS_EN1.

Since the CPUs are passively cooled (no directly attached fan), I put a short on pins 2-3 of jumper FAN_SEL1.

    Backplane

    The backplane on my case can control up to four fans. Since I chose to use the motherboard as the

    controller, I had to change some jumpers on the backplane. I connected pins 2-3 on jumpers 97, 98, 99,

    and 100.


    RAID Controller

Before modifying the RAID controller's BIOS, you should have the OS volume defined. All the other RAID

    volumes are easier to set up from within Windows.

    Always install the latest controller BIOS as the first step.

    To go into the RAID setup for this specific controller, hit {Ctrl}+{A} during bootup. For the OS, I created a

    RAID 1 partition that uses the entire drive with a stripe size of 256K. I will discuss the stripe size and

    other important settings when I get to the Windows portion of the RAID setup. I chose to enable the

write and read caches because I know that I can rebuild the OS volume at any time without meaningful

    data loss.

    Motherboard BIOS

    After installing the latest motherboard BIOS, the following settings should be changed. These changes

    are primarily to improve performance.

Main
    Floppy A = Disabled
Advanced
    Chipset Configuration
        North Bridge Configuration
            Demand Scrubbing = Enabled
    PCI/PnP Configuration
        Plug and Play O/S = Yes
        Onboard LANx Configuration
            Option ROM Scan = Disabled
    Peripheral Configuration
        Onboard Floppy Controller = Disabled
Server
    Remote Access Configuration
        Remote Access = Disabled
Boot
    Boot Device Priority
        DVD
        Volume 1
        {Disable network boot}
    Boot Settings Configuration
        Full Logo Display = Disabled


    Software Setup

    Installs

    After installing Windows 2008, I added the PowerShell and Backup components and nothing else. I will

    talk about the important setting changes in the next section.

    Next I installed the latest drivers for all my components. The motherboard has a number of built-in

    components, such as the LAN controller, that must be considered separately.

    With the drivers installed, I could then connect to the Internet to run the Windows update. I finished

    the installs with the Adaptec Storage Manager, which is the setup software that goes with my RAID

    controllers.

    Windows Settings

    There are many tweaks that I do to the standard Windows environment to make it more efficient for the

way I work. There are also many common practices for improving general performance. I won't be going over any of those here. I will mention just a few things that are too often skipped.

Always disable services that aren't needed, such as the Print Spooler and the WinHTTP Web Proxy Auto-

    Discovery service. Also, this kind of server will never go into hibernation, so delete the hibernation file

    with the following command:

    powercfg -h off

    The easiest way to apply the Windows license is with the following command:

    slmgr -ipk {PUT LICENSE # HERE}

    RAID Settings

    SQL Server is very efficient at reading and writing to hard drives. To maximize performance, you need to

    understand a little about how it works.

    SQL Server arranges data in groups of 8K pages called Extents. Each extent contains eight pages for a

    size of 64K. SQL Server can read one page at a time but always writes in full extents. This is actually

    more efficient since the lazy writer can try to group data together to get the most out of what is

    otherwise one of the slowest tasks.

    When reading/writing to a RAID array that uses striping, such as RAID 0, 5, or 10, you will get the best

performance when extents do not cross disk boundaries. There are two things you have to do to ensure this. The first is to make sure your stripe size is a multiple of the extent size. The other is to make sure

    the disk partitions are block aligned. I will discuss the second part in the next section.

    I tested using 64K stripes and 256K stripes with these controllers and different size blocks of data for

    both read and write. With 64K data blocks, I saw no meaningful difference in performance. But when I


used larger blocks, the 256K stripes were faster. I believe this is because the RAID controller may be optimized for 256K stripes.

    My final setup is as follows:

    Controller 1

    OS: two drives, RAID 1, use all space, stripe size = 256, enable write and read caches.

    DATA: six drives, RAID 10, use all space, stripe size = 256, enable write and read caches.

    Controller 2

LOGS: four drives, RAID 10, use all space, stripe size = 256, disable write cache.

    TEMPDB: two drives, RAID 0, use all space, stripe size = 256, enable write and read caches.

    BACKUP: one drive, use all space, enable write and read caches.

    To avoid putting the disk volumes on the wrong arrays, I create the above arrays one at a time while

    doing the next step.

    Disk Volumes

    There are two performance considerations here. The first has to do with block aligning the disk

    partitions as mentioned in the previous section. The second involves a very important technique called

    short stroking.

    Block Aligned

    It is critical for performance that a single written block go to only one hard drive and not get split into a

    partial block on one and a partial block on another. The calculation for where to align the first block is:

Partition offset (in sectors) = Stripe size in bytes / Physical sector size of the hard drive in bytes

    All hard drives, unless stated otherwise, use a sector size of 512 bytes. As stated in the RAID Settings

section above, we are using 256K stripes. So, the calculation is:

    Offset = (256 * 1024) / 512 = 512
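Both alignment constraints can be checked in a couple of lines. A sketch using the 512-byte sectors and 64K extents discussed above:

```python
EXTENT_BYTES = 64 * 1024   # SQL Server extent: eight 8K pages
SECTOR_BYTES = 512         # standard hard-drive sector size

def aligned_for_sql(stripe_kb: int, offset_sectors: int) -> bool:
    """The stripe must hold whole extents, and the partition offset
    must land exactly on a stripe boundary."""
    stripe = stripe_kb * 1024
    offset = offset_sectors * SECTOR_BYTES
    return stripe % EXTENT_BYTES == 0 and offset % stripe == 0
```

The article's settings (256K stripes, 512-sector offset) pass the check, while the classic 63-sector default offset of older Windows versions does not.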

    I used the command line program DiskPart that comes with Windows to create my aligned partitions. I

    believe that, starting with Windows 2008, all partitions are automatically block aligned, but I will

continue to use DiskPart because it is faster than loading Computer Management.

    Short Stroking

    Before I create my partitions, there is one more thing to consider that has a huge impact on

    performance. It is common knowledge that data on the outside tracks of a hard drive platter is read and

    written faster than on the inside tracks. This is because the track is longer and therefore holds more

    data even though the entire track takes the same amount of time to pass the read/write heads. Even

    more important, however, is that it takes a relatively long time to move the read/write heads from one

    track to another. The further away the track, the longer the delay before more data can be processed.

    Short stroking takes advantage of the above two facts. If you were to put all your data on only the

    outermost tracks, you would be using the fastest tracks while minimizing the movement of the heads.


    The easiest way to do this is to break your disk arrays into multiple partitions. Each partition gets spread

    across all the disks in the array by the RAID controller and partitions are created from the outside of the

    platter first.

    For my database servers, I used a little over 10% for the first partition on the Data, Logs, and TempDB

arrays. I also created a second partition on Data and Logs of 10% for databases that don't get used much. The rest of the space I am throwing away. I can afford to use so little capacity because I have so

    many drives in my system with plenty of room to add more. The OS and Backup arrays use the entire

    disk.
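The partition sizes follow from the array's usable capacity. A sketch of the arithmetic, assuming the 147 GB drives and the roughly 10% figure above:

```python
def short_stroke_gb(drives: int, drive_gb: int = 147,
                    fraction: float = 0.10, raid10: bool = True) -> float:
    """Capacity of a short-stroked partition that takes `fraction` of
    the array's fastest (outermost) tracks. RAID 10 halves the raw
    capacity; RAID 0 and single drives keep all of it."""
    usable = (drives // 2 if raid10 else drives) * drive_gb
    return usable * fraction
```

For the six-drive Data array this comes to about 44 GB, in the same range as the 50 GB partitions created with DiskPart.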

    The Commands

If you enter diskpart at a command prompt, a console window will open where you can enter the

diskpart commands. You can enter the command list disk at any time to get the disk numbers

    needed by the select command.

To create two 30 GB partitions on the LOGS array and assign drive letters L and M:

select disk {disk#}
create partition primary align=512 size=30720
create partition primary align=512 size=30720
select partition 1
format label=Logs-Primary quick
assign letter=L
select partition 2
format label=Logs-Secondary quick
assign letter=M

    As a matter of practice, I use drive letters F and G for DATA, T for TempDB, and W for Backup. This

    leaves room for adding partitions in the future with consecutive letters if I need to, and makes it easy to

    remember where everything is when temporarily mapping to a drive from another server.

    The commands to create the two DATA partitions and the one TempDB partition are the same except

    for the label names and drive letters and I used 51200 for size to create 50 GB partitions for DATA.
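The size values above come from the fact that diskpart's size parameter is in megabytes, so gigabyte targets have to be converted. A trivial sketch of the conversion:

```python
# diskpart's "size=" parameter is expressed in megabytes, so gigabyte
# targets must be converted: 30 GB -> 30720, 50 GB -> 51200 (as used above).
def diskpart_size_mb(gb):
    return gb * 1024

print(diskpart_size_mb(30))   # Logs partitions
print(diskpart_size_mb(50))   # Data partitions
```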

    The commands for Backup are:

select disk {disk#}
create partition primary
format label=Backup quick
assign letter=W

    When I am finished:

    exit

    Final Drive Settings

    There is no need for file indexing on my non-OS drives, so I turn that off. Right-click on a drive letter,

    select Properties, and unselect Index this drive.


    The size of the recycle bin is unacceptably large on all the drives and entirely not needed on TempDB. I

    accept that it is unlikely that a database is going to fit anyway, and most everything else is going to be

    small. So I use the following recycle bin sizes:

    OS and Logs: 1024 MB

Data and Backup: 4607 MB

TempDB: None

    The last thing is to create directories on the new volumes. I know I will be upgrading SQL Server at least

    once during the life of these machines and I will likely have overlap where I have multiple versions

    installed. To simplify life down the road, I start by including the version name in the directory names.

    Here are the DOS commands that I used:

MD "F:\SQL2005 Data"
MD "G:\SQL2005 Data"
MD "L:\SQL2005 Transaction Logs"
MD "M:\SQL2005 Transaction Logs"
MD "M:\SQL2005 Error Logs"
(Add shortcut to the above directory on L:\)
MD "T:\SQL2005 TempDB"
MD "W:\SQL2005 Backup"
MD "W:\System Backups"

    Networking

You may recall that my motherboard has four gigabit Ethernet ports that support teaming. I took

    advantage of that by creating two teams of two ports each, with each team on two switches within my

    redundant private network. The two ports on the left use the Intel Pro-1000EB controller and the other

    two use the Intel Pro-1000PL. I chose not to cross controllers when creating my teams although I could

    have put all four ports on one team. Each team gets its own IP address and settings that are used

    instead of the IP and settings of the individual ports.

    SQL Server

    There are quite a few settings that need to be changed depending on your situation. Some are

    performance related but require testing to determine what is best for your specific server and databases.

    Covering those settings is an article in itself and will be skipped here. Many other settings are required

    in your situation because of the features you are using. There is no need for me to cover those since

    they will be forced upon you anyway. The only settings I will cover are ones that are completely


    optional yet I consider always required. I know others will believe I have omitted important things from

this list, but their lists don't contain all my settings either.

Once you have everything set up, don't forget to install the latest service release and patches.

    SQL Server Service Account

Always use the most restricted account possible for running the SQL Server Service. If you're not sure

    what permissions/restrictions to give such an account, start by using the Local System account. Never

    use an Administrator account (local or domain) except for troubleshooting. Whatever account you use, I

    will be referring to it as the SQL Server Service account below.

    Instant Initialization

    An important feature in Windows starting with 2003 (and XP) is called Instant Initialization. Usually

    when you create a file initialized to be a specific size, Windows will fill the file with binary zeros. This

    prevents random data from appearing within a file and also prevents a user from seeing secure data that

    had previously been deleted. Instant Initialization is an optional way to create a file and skips the

    zeroing process.

    SQL Server, starting with 2005, is able to request Instant Initialization for files. This is important because

    the files created by SQL Server tend to be huge and would otherwise take a considerable amount of

    time to create. Because of the security implications of being able to create a file that is not zeroed, only

    processes that have Perform volume maintenance tasks permission can request this.

    To give the SQL Server Service account this capability, open the Local Security Policy MMC and go to

    Local Policies..User Rights Assignment. You can then add the account to the Perform volume

    maintenance tasks entry. SQL Server will then automatically use Instant Initialization after it has been

    restarted.

    While you are at User Rights Assignment, you should also give the SQL Server Service account the ability

to Lock pages in memory, since SQL Server does its own efficient memory management.

    TempDB

    As stated in the Hard Drives..RAID section near the top of this article, TempDB should be spread across

multiple files, one per CPU core. I am using two 4-core CPUs, so I will use eight files. I will also move

    the TempDB from its default location. Here is the SQL script that accomplishes this using the drive

    letters and directory names I created above. The sizes I set are based on trends from our pre-existing

    servers. Ideally, the initial size should be set so that the files are not overly large yet never need to grow.

    Your sizes will likely be different. SQL Server will recreate the TempDB files every time it is started.

use master
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog,
    FILENAME = 'L:\SQL2005 Transaction Logs\templog.ldf')

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev,
    FILENAME = 'T:\SQL2005 TempDB\tempdb.mdf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
    FILENAME = 'T:\SQL2005 TempDB\tempdb2.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3,
    FILENAME = 'T:\SQL2005 TempDB\tempdb3.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4,
    FILENAME = 'T:\SQL2005 TempDB\tempdb4.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev5,
    FILENAME = 'T:\SQL2005 TempDB\tempdb5.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev6,
    FILENAME = 'T:\SQL2005 TempDB\tempdb6.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev7,
    FILENAME = 'T:\SQL2005 TempDB\tempdb7.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev8,
    FILENAME = 'T:\SQL2005 TempDB\tempdb8.ndf',
    SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)

    To verify the results of the script, you can run the command:

    select name, physical_name, state_desc from sys.master_files
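Since the ADD FILE statements in the script differ only in the file number, a server with a different core count just needs more or fewer copies of the same statement. A sketch of generating them programmatically, using the same sizes as the script (the helper name is my own):

```python
# Sketch: generate the repetitive ADD FILE statements for any core count,
# reusing the script's sizes (200 MB initial, 2500 MB max, 10 MB growth).
def tempdb_add_file_sql(cores, path=r"T:\SQL2005 TempDB"):
    stmts = []
    for n in range(2, cores + 1):   # file 1 (tempdev) already exists
        stmts.append(
            f"ALTER DATABASE tempdb ADD FILE (NAME = tempdev{n}, "
            f"FILENAME = '{path}\\tempdb{n}.ndf', "
            f"SIZE = 200MB, MAXSIZE = 2500MB, FILEGROWTH = 10MB)"
        )
    return stmts

for stmt in tempdb_add_file_sql(8):   # 8 cores -> 7 additional files
    print(stmt)
```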

    Backup Directory

    There are many SQL Server settings that can only be done in the registry. The path for the default

    Backups directory is one of those. Browse to:

    HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer

Set the value for BackupDirectory to:

    W:\SQL2005 Backup


    Default DATA and LOGS Directories

    Protect the integrity of your optimized disk arrangement by setting the default paths for new databases.

    Otherwise, you or someone else may forget to specify the correct locations at database creation, and

    then you will find yourself needing to take the database offline in order to move it.

From SQL Server Management Studio, right-click on the server and select Properties. Go to the Database Settings page to set the directories.

    Other Files

    I want to use the directory M:\SQL2005 Error Logs\ for all error logs and I will need to tell SQL Server

    that I moved the master database to my DATA drive. I can do all of this using the SQL Server

    Configuration Manager. For the master database, I used the Configuration Manager to change the

    startup command line options for SQL Server.

Open SQL Server Configuration Manager and select SQL Server 2005 Services in the left pane. Right-

    click on SQL Server ({instance name}) and select Properties. Go to the Advanced tab. For Dump

    Directory, I entered:

    M:\SQL2005 Error Logs\

    For Startup Parameters, I entered the following values for the -d, -e, and -l parameters:

    -dF:\SQL2005 Data\master.mdf;-eM:\SQL2005 Error Logs\ERRORLOG;-lF:\SQL2005 Data\mastlog.ldf

    To change the path of the SQL Server Agent error logs, right-click on SQL Server Agent ({instance name})

    in the right pane and select Properties. On the General tab, you can change the path to:

    M:\SQL2005 Error Logs\SQLAGENT.OUT

Don't close the Configuration Manager yet, as you will need it for the next step.

    Set Port for TCP/IP

    Of course I will be accessing my databases from other servers, which means I have to set up a firewall

    rule, and that means I need to set a fixed port for TCP/IP. Since I already have the Configuration

    Manager open from the previous step, I will set the port now. The default instance of SQL Server uses

TCP port 1433, which I will use in all my examples below.

    Select the server instance under SQL Server 2005 Network Configuration. In the right pane, right-click

on TCP/IP and select Enable if necessary. Right-click on TCP/IP again and select Properties. On the IP Addresses tab, scroll down to IPAll at the bottom and set TCP Port to 1433.

    After you close the Configuration Manager, you will need to restart the SQL Server service.

    Whatever port number you use, you will need to include it as part of your connection strings if you do

    not start the SQL Browser service. I recommend keeping the service disabled in production


    environments to reduce the attack surface and reduce the number of running programs. Your

    connection strings will now use {ServerName},1433 instead of {ServerName}\{InstanceName}.
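A minimal sketch of what such a connection string looks like, assuming an ADO.NET-style format with integrated security; the server name "DBSRV01" is made up, and the helper is my own:

```python
# Sketch: with a fixed port and the SQL Browser service disabled, clients
# address the server as "host,port" instead of "host\instance".
# "DBSRV01" is a hypothetical server name, not one from this article.
def conn_string(server, port=1433, database="master"):
    return (f"Server={server},{port};Database={database};"
            "Integrated Security=SSPI;")

print(conn_string("DBSRV01"))
# -> Server=DBSRV01,1433;Database=master;Integrated Security=SSPI;
```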

    Enabling Remote Administration

The first installed instance of SQL Server uses TCP port 1434 for remote administration. You will not only

    need to open this port on the firewall, but will also need to enable it in SQL Server.

You can use either the Surface Area Configuration tool or Management Studio. Below are the

    instructions using the Surface Area Configuration tool.

    Open SQL Server Surface Area Configuration and click on the link for Surface Area Configuration for

    Services and Connections. Expand your instance in the hierarchy and go to Database Engine..Remote

Connections. Set Local and remote connections to Using TCP/IP only.

    Firewall

You need to allow incoming connections to ports 1433 and 1434. Below are instructions for the built-in

    Windows 2008 firewall using minimal access.

Run WF.msc or open the Windows Server Manager and go to Configuration..Windows Firewall with

Advanced Security..Inbound Rules. Add a new rule with Local specific ports: TCP 1433, 1434; Remote: all

    ports; Profiles: Domain. You can add other restrictions, such as limiting the allowed IP addresses, if your

    situation allows.
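The same rule can be created from the command line with netsh advfirewall instead of the MMC snap-in. A sketch, built here as a string rather than executed, so it can be reviewed first; the rule name is my own:

```python
# Sketch: the equivalent inbound rule via netsh advfirewall.  The rule name
# "SQL Server" is a made-up label; run the printed command in an elevated
# prompt on the server itself.
rule = ("netsh advfirewall firewall add rule "
        'name="SQL Server" dir=in action=allow '
        "protocol=TCP localport=1433,1434 profile=domain")
print(rule)
```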

    What Else

    There are really a hundred other things that go into the setup of a server such as backup schedules and

    account policies. Much of what is left falls under common company standards and I feel comfortable

    skipping all that in this article. It is easy to see why larger companies assign a team to set up an

    individual server and pay far too much for their hardware. For most humans, this is all just too much to

    know.


    Pictures

    Front

This is how you get 24 hot-swappable 3.5-inch drives into only 4U. Notice the Power and Reset buttons

    along with the indicator lights below the left case handle.


    Above

    There are two SAS controllers (A), each with 4 ports (1-4). Each port has 4 lanes to control 4 drives.

Notice that only ports 3 and 4 are in use on each controller. The SATA/SAS cables (B, red) and SGPIO

cables (B, gray) are jumbled together at the controllers so that I can fan each port across a single row of

    drives while tying together the cables that go to each column. This makes sense since there are 4 drives

    per row and 4 lanes per port. If the cables had been just a little longer, I could have instead had them all

go together at the top of the picture and avoided the rat's nest. You can see that the cables are going

    through a slot underneath the 3 hot swappable fans (E). There is just a little space between the fans and

    the backplane (F) but you can see that I managed to keep the area clear of cables. This is important to

maximize airflow. The hard drives are on the other side of the backplane and out of this picture. The case comes with a large air shroud (C) that concentrates airflow to the 2 processors under the giant heat

    sinks (D). Two more hot swappable fans pull air out the back. You can see one of the power supplies

    going along the depth of the case (G). The other one is next to it, outside the picture. Next to the

    power supply are the power cables, which you can see are completely outside of the airflow due to the

    shroud. The DVD reader (H) opens out the back of the case and you can see I was able to route its cable

    over the air shroud so that it is entirely out of the airflow.


    Top Front

    For a little perspective, you can see the hard drives in the front and the tops of the three hot swappable

    fans behind the backplane.