
Advanced Concepts Guide MetaFrame Presentation Server for Windows Version 3.0

MetaFrame Presentation Server 3.0 for Windows MetaFrame Access Suite

Copyright and Trademark Notice

Information in this document is subject to change without notice. Companies, names, and data used in examples herein are fictitious unless otherwise noted. Other than printing one copy for personal use, no part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without the express written permission of Citrix Systems, Inc.

Copyright © 2001-2004 Citrix Systems, Inc. All rights reserved.

Citrix, ICA (Independent Computing Architecture), MetaFrame, MetaFrame XP, NFuse, and Program Neighborhood are registered trademarks, and SpeedScreen is a trademark of Citrix Systems, Inc. in the United States and other countries.

RSA Encryption © 1996-1997 RSA Security Inc. All rights reserved.

Trademark Acknowledgements

Adobe, Acrobat, and PostScript are trademarks or registered trademarks of Adobe Systems Incorporated in the U.S. and/or other countries. Apple, LaserWriter, Mac, Macintosh, Mac OS, and Power Mac are registered trademarks or trademarks of Apple Computer Inc. DB2, Tivoli, and NetView are registered trademarks, and PowerPC is a trademark of International Business Machines Corp. in the U.S. and other countries. HP OpenView is a trademark of the Hewlett-Packard Company. Java, Sun, and SunOS are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. Solaris is a registered trademark of Sun Microsystems, Inc. Sun Microsystems, Inc. has not tested or approved this product. Portions of this software are based in part on the work of the Independent JPEG Group. Portions of this software contain imaging code owned and copyrighted by Pegasus Imaging Corporation, Tampa, FL. All rights reserved. Macromedia and Flash are trademarks or registered trademarks of Macromedia, Inc. in the United States and/or other countries. Microsoft, MS-DOS, Windows, Windows Media, Windows Server, Windows NT, Win32, Outlook, ActiveX, Active Directory, and DirectShow are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Netscape and Netscape Navigator are registered trademarks of Netscape Communications Corp. in the U.S. and other countries. Novell Directory Services, NDS, and NetWare are registered trademarks of Novell, Inc. in the United States and other countries. Novell Client is a trademark of Novell, Inc. RealOne is a trademark of RealNetworks, Inc. SpeechMike is a trademark of Koninklijke Philips Electronics N.V. Unicenter is a registered trademark of Computer Associates International, Inc. UNIX is a registered trademark of The Open Group. All other trademarks and registered trademarks are the property of their owners.

Document Code: May 24, 2004 3:13 pm (GG)

Contents

Chapter 1   Introduction
    Document Conventions
    Getting More Information and Help

Chapter 2   Planning Your Deployment
    Hardware Configuration
    Operating System Configuration
    Planning User Capacity

Chapter 3   Facilitating Server Farm Communication
    Understanding Zones and Data Collector Communication
    Bandwidth Requirements for Server Farm Events
    Implementing the Data Store
    Multihoming Servers in the Farm
    Using DNS Servers for Client to Server Connections

Chapter 4   Deploying MetaFrame Presentation Server
    Installing and Upgrading to MetaFrame Presentation Server 3.0
    Rapid Deployment of MetaFrame Presentation Server
    Other MetaFrame Presentation Server Deployment Scenarios
    Deploying MetaFrame Presentation Server Clients

Chapter 5   Managing Server Farms
    Management Consoles for MetaFrame Presentation Server
    Administering Server Farms
    Installation Manager Recommendations
    Resource Manager Recommendations
    Network Manager Issues
    Using Visual Basic Scripts to Access the WMI Provider

Chapter 6   Deploying, Publishing, and Configuring Applications
    Publishing in Domains with Thousands of Objects
    Content Redirection
    Enhanced Content Publishing and Redirection with the Web Interface
    Configuring SpeedScreen Browser Acceleration for Published Applications
    Media Formats and Network Types Supported by SpeedScreen Multimedia Acceleration
    SpeedScreen Multimedia Acceleration Configuration Options
    Adding and Removing Users from Published Applications Using VBScript and MetaFrameCOM
    Using Installation Manager to Deploy Windows Installer Packages

Chapter 7   Optimizing MetaFrame Presentation Server Performance
    Network Optimization
    Server Optimization
    Applications Optimization
    Disk Optimization
    Memory Optimization
    User Settings Optimization
    Audio Recording Optimization
    Client Audio Mapping Virtual Driver
    ICA Priority Packet Tagging
    Load Balancing Servers Running the Web Interface

Chapter 8   Security Issues and Guidelines
    Securing Your Servers
    Security Considerations for the Data Store
    Network Security Considerations
    Configuring Proxy/Firewall Integration
    Using Smart Cards with MetaFrame Presentation Server
    Using Citrix Products in a Wireless LAN Environment
    Deploying the Java Client using the Web Interface

Chapter 9   Troubleshooting
    Troubleshooting the IMA Service
    Troubleshooting Novell Directory Services Integration
    SQL Database Replication Troubleshooting Tips
    Resource Manager Troubleshooting Q&A
    Trusts and User Group Access Issues
    Other Troubleshooting Recommendations
    Collecting Information for Citrix Technical Support

Chapter 10  Using MetaFrame Presentation Server with Novell Directory Services
    Farm Layout and System Requirements
    Setting Up MetaFrame Presentation Server for Use in NDS
    Installing the Novell Client
    Using ZENworks to Simplify User Credentials
    Using MetaFrame Presentation Server in the NDS Environment
    Enabling NDS Usage in the Web Interface
    NDS Usage with the MetaFrame Presentation Server Client
    Tips and Techniques

Chapter 11  Creating and Replicating Printers
    Printer Mapping Enhancements
    Optimizing Printer Creation
    Updating Printer Properties When Users Log On
    Printer Driver Replication

Chapter 12  Replicating Data Stores Running Microsoft SQL Server
    Preparing for Replication
    Replicating a Data Store in SQL Server 2000
    Replicating a Data Store in SQL Server 7
    Pointing Servers to the Replicated Data Store

Appendix A  Utilities
    DSVIEW
    MSGHOOK
    QPRINTER
    QUERYDC
    QUERYDS
    QUERYHR
    SCCONFIG
    LMNEWLOG
    LMSWITCH

Appendix B  Tested Hardware

Appendix C  Error Codes
    IMA Error Codes
    Event Log Warning and Error Messages
    IMA Subsystem Tracing
    Presentation Server Console Error Codes
    Resource Manager Billing Error Codes

Appendix D  Registered Citrix Ports

CHAPTER 1

Introduction

The Advanced Concepts Guide for MetaFrame Presentation Server for Windows, Version 3.0, is a collection of best practices, tips, and suggestions for effectively using MetaFrame Presentation Server. To get the most from this guide, you should be familiar with the concepts and configuration procedures in the MetaFrame Presentation Server Administrator's Guide and the additional documentation for MetaFrame Presentation Server components, all of which are available in the Document Center included on your MetaFrame Presentation Server CD (see below for details). Additional information is available from the MetaFrame Presentation Server readme file. See the MetaFrame Presentation Server Client readme files for known issues and workarounds. For further information or to get white papers about some of the topics discussed in this document, visit the Citrix Web site at http://www.citrix.com.

Note: All terminology, product references, and recommendations are subject to change without notice.


Document Conventions

MetaFrame Presentation Server documentation uses the following typographic conventions for menus, commands, keyboard keys, and items in the program interface:

Convention        Meaning
Boldface          Commands, names of interface items such as text boxes, option buttons, and user input.
Italics           Placeholders for information or parameters that you provide. For example, filename in a procedure means you type the actual name of a file. Italics are also used for new terms and the titles of books.
%SystemRoot%      The Windows system directory, which can be WTSRV, WINNT, WINDOWS, or another name you specify when you install Windows.
Monospace         Text displayed in a text file.
{ braces }        A series of items, one of which is required in command statements. For example, { yes | no } means you must type yes or no. Do not type the braces themselves.
[ brackets ]      Optional items in command statements. For example, [/ping] means that you can type /ping with the command. Do not type the brackets themselves.
| (vertical bar)  A separator between items in braces or brackets in command statements. For example, { /hold | /release | /delete } means you type /hold or /release or /delete.
... (ellipsis)    You can repeat the previous item or items in command statements. For example, /route:devicename[,...] means you can type additional devicenames separated by commas.


Getting More Information and Help

This section describes how to get more information about MetaFrame Presentation Server and how to contact Citrix.

Accessing Product Documentation

The documentation for MetaFrame Presentation Server includes online documentation, known issues information, integrated on-screen assistance, and application help.

Online documentation is provided as Adobe Portable Document Format (PDF) files. Online guides are provided that correspond to different features of MetaFrame Presentation Server. For example, information about the Web Interface is contained in the Web Interface Administrator's Guide. Use the Document Center to access the complete set of online guides.

Be sure to read the Readme.htm file in the \Documentation directory of the product CD before you install MetaFrame Presentation Server or when troubleshooting. This file contains important information that includes last-minute documentation updates and corrections.

In many places in the MetaFrame Presentation Server user interface, integrated on-screen assistance is available to help you complete tasks. For example, in the Access Suite Console, you can position your mouse over a setting to display help text that explains how to use that control. Online help is available in many components. You can access the online help from the Help menu or Help button.

The Advanced Concepts Guide is available for download as a PDF file from the Support page of the Citrix Web site. Follow the links for Support > Knowledge Center > Product Documentation to navigate to the download page for this document.

Important: To view, search, and print the PDF documentation, you need Adobe Reader 5.0.5 or later with Search. You can download Adobe Reader for free from the Adobe Systems Web site at http://www.adobe.com/.


Accessing the Document Center

The Document Center provides a single point of access to product documentation and enables you to go straight to the section in the documentation that you need. It also includes:

- A list of common tasks and a link to each item of documentation.
- A search function that covers all the PDF guides. This is useful when you need to consult a number of different guides.
- Cross-references among documents. You can move among documents as often as you need using the links to other guides and the links to the Document Center.

You can access the Document Center from your product CD or install it on your servers. To install the Document Center, select the option from the MetaFrame Presentation Server Autorun screen.

To start the Document Center:

1. From your product CD, navigate to the \Documentation folder, or, on a server on which you installed the Document Center, select Documentation from the Citrix program group on the server's Start menu.
2. Open document_center.pdf. The Document Center appears.

If you prefer to access the guides without using the Document Center, you can navigate to the component PDF files using Windows Explorer. If you prefer to use printed documentation, you can also print each guide using Adobe Reader.

Note: The Advanced Concepts Guide is not part of the Document Center at this time.


Getting Service and Support

Citrix provides technical support primarily through the Citrix Solutions Network (CSN). Our CSN partners are trained and authorized to provide a high level of support to our customers. Contact your supplier for first-line support or check for your nearest CSN partner at http://www.citrix.com/support/.

In addition to the CSN channel program, Citrix offers a variety of self-service, Web-based technical support tools from its Knowledge Center at http://support.citrix.com/. Knowledge Center features include:

- A knowledge base containing thousands of technical solutions to support your Citrix environment
- An online product documentation library
- Interactive support forums for every Citrix product
- Access to the latest hotfixes and service packs
- Security bulletins
- Online problem reporting and tracking (for customers with valid support contracts)

Another source of support, Citrix Preferred Support Services, provides a range of options that allows you to customize the level and type of support for your organization's Citrix products.

Subscription Advantage

Subscription Advantage gives you an easy way to stay current with the latest server-based software functionality and information. Not only do you get automatic delivery of feature releases, software upgrades, enhancements, and maintenance releases that become available during the term of your subscription, you also get priority access to important Citrix technology information. You can find more information on the Citrix Web site at http://www.citrix.com/services/ (select Subscription Advantage). You can also contact your Citrix sales representative or a member of the Citrix Solutions Network for more information.


Customizing MetaFrame Presentation Server

The Citrix Developer Network (CDN) is at http://www.citrix.com/cdn/. This open-enrollment membership program provides access to developer toolkits, technical information, and test programs for software and hardware vendors, system integrators, licensees, and corporate IT developers who incorporate Citrix computing solutions into their products.

Most of the operations that you can perform using the MetaFrame Presentation Server user interface can also be scripted by using the Citrix Software Development Kit (SDK). The SDK also lets programmers customize most aspects of MetaFrame Presentation Server. The SDK is available from http://www.citrix.com/cdn/.

Education and Training

Citrix offers a variety of instructor-led training and Web-based training solutions. Instructor-led courses are offered through Citrix Authorized Learning Centers (CALCs). CALCs provide high-quality classroom learning using professional courseware developed by Citrix. Many of these courses lead to certification. Web-based training courses are available through CALCs, resellers, and from the Citrix Web site. Information about programs and courseware for Citrix training and certification is available from http://www.citrix.com/edu/.


CHAPTER 2

Planning Your Deployment

This chapter includes recommendations for server hardware and operating system configuration and for providing adequate server capacity for user sessions. Before installing and deploying MetaFrame Presentation Server, read and consider these sections.

Hardware Configuration

Citrix recommends the following hardware configuration options to improve the performance of MetaFrame Presentation Server.

General Recommendations

Employ RAID Arrays

Because hard drives are the most common point of hardware failure in multi-processor configurations, Citrix recommends a RAID (Redundant Array of Independent Disks) setup. See the MetaFrame Presentation Server Administrator's Guide for more information regarding available RAID configurations. If RAID is not an option, a fast SCSI 2, SCSI 3, or Ultra 160 drive is recommended. Faster hard disks are inherently more responsive and may eliminate or curtail disk bottlenecks. Currently, 15,000 RPM hard drives are the most commonly deployed.

Install Multiple Disk Controllers

For quad and eight-way servers, install at least two controllers, one for operating system disk usage and the other to store applications and temporary files. Isolate the operating system as much as possible; do not install applications on the controller where the operating system is installed. Distribute hard drive access load as evenly as possible across the controllers.


Provide Adequate Disk Space for User Profiles

Partition and hard drive size depends on both the number of users connecting to computers running MetaFrame Presentation Server and the applications running on the server. Running applications such as those in the Microsoft Office suite can result in user profile directory sizes of hundreds of megabytes. Large numbers of user profiles can use gigabytes of disk space on the server. You must have enough disk space for these profiles on the server.

Note: Roaming profiles and permanent user data should be stored on a centralized file server, Storage Area Network (SAN), or Network-Attached Storage (NAS) that can adequately support the environment. In addition, this storage medium should be logically located near the servers so that minimal router hops are required and logon times are not unnecessarily increased.

Enable Disk-Write Caching

Users may experience performance improvements if disk-write caching is enabled on a server's RAID controller. For example, enabling disk-write caching can help mitigate problems users experience when many users log on at the same time.

Server Redundancy

When planning the hardware configuration of your server farm, consider the following precautions:

- At least one backup server should be available in the event of a production server failure. It is typical for some organizations to plan for as much as 25% redundancy within the production environment.
- Servers that enable access to MetaFrame Presentation Server, such as Web Interface, Secure Gateway, Secure Ticket Authority, and MetaFrame Secure Access Manager servers, serve as single points of failure if only one server is deployed with a given functionality. Deploy two or more servers to service each function to ensure continued access to the server farm.

Data Store Hardware Guidelines

The performance of various events within your MetaFrame Presentation Server 3.0 environment can be improved by increasing the CPU power and speed of the database server that hosts the data store. Testing shows that adding processors to the data store server can dramatically improve response time when multiple simultaneous queries are being executed. If the environment has large groups of servers coming online frequently, the additional processors can service the requests faster.


However, with serial events, such as installing or removing a farm server, additional processors yield smaller performance gains. To improve the processing time for these types of events, increase the processor speed of the data store hardware platform.

Note: The response time of other farm events (such as recreating the local host cache or replicating printer drivers to all farm servers) is more closely related to the size of the farm than to the response time of the data store.

Objects in the Data Store

A major factor in determining the hardware needed to ensure proper performance of the data store is the number and size of object records in the data store. When you create an object in the Presentation Server Console by performing an action such as publishing an application or adding a MetaFrame administrator, you create a record for that object in the data store database. The objects include the following:

- Applications
- Administrators
- Folders
- Installation Manager groups
- Installation Manager packages
- Load evaluators
- Printers
- Printer drivers
- Policies
- Resource Manager metrics
- Servers

Some objects, such as applications and servers, create multiple entries in the data store. As the number of entries in the data store grows, the time required to search and retrieve the entries also grows. As servers are added to the farm, the data store must service more requests. Consequently, plan the data store hardware platform based on the total number of servers you plan to include in the farm. For more information about choosing a database for the data store, see the MetaFrame Presentation Server Administrator's Guide.


The Size of Data Store Objects

The following table shows the estimated sizes of object records created in the data store for various tasks performed in the Presentation Server Console. A SQL Server 2000 Service Pack 3 database was used.

Note: The following measurements should be considered only as guidelines, because the size of an object's entries in the data store depends on many factors.

Task                                                                    Size of Object Record Created in Data Store (Bytes)
Publish an application (for example, Wordpad)                           11338
Create a blank policy                                                   7122
Configure all rules and assign policy to domain users group             986
Create a Resource Manager application configured for one server        7467
Import a network print server/printer driver                            2183
Add printer driver                                                      2485
Add Resource Manager metric for one server (Citrix MetaFrame
Presentation Server/data store bytes written/sec)                       1808
Add one domain administrator as a MetaFrame administrator               3164
Configure Installation Manager properties (account, path)               1635
Add an Installation Manager package                                     24275
Create an Installation Manager package group                            4225
Add a server folder                                                     1188
Add an application folder                                               1191
Create a Load Manager load evaluator with one evaluation rule
(server user load)                                                      1786
Create an Installation Manager server group containing one server       1113
Add a server to the farm                                                81891
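For a rough sense of how these records accumulate, the sketch below multiplies out the two largest per-object sizes from the table for a hypothetical farm. This is an illustrative lower bound of my own construction, not a sizing method from this guide; as noted above, object sizes vary and farms create many other record types.

# Rough lower-bound data store estimate built from the per-object sizes above.
SERVER_RECORD_BYTES = 81891   # "Add a server to the farm"
APP_RECORD_BYTES = 11338      # "Publish an application"

def estimate_data_store_bytes(servers, published_apps):
    """Count only the two largest record types; real farms store more."""
    return servers * SERVER_RECORD_BYTES + published_apps * APP_RECORD_BYTES

# Hypothetical 100-server farm publishing 50 applications:
total = estimate_data_store_bytes(100, 50)
print(round(total / 2**20, 1), "MB")  # about 8.4 MB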


Operating System Configuration

The following Windows server operating system configuration options are recommended to improve the performance, stability, and security of your MetaFrame Presentation Server implementation.

General Recommendations

- All partitions must be in Windows NT File System (NTFS) format. NTFS allows security configuration, offers better performance and fault tolerance, and saves disk space because NTFS partitions have small, constant cluster sizes (the minimum is 4KB). File Allocation Table (FAT) partitions require much larger cluster sizes as the size of the partition increases (the minimum being 32KB). More space is wasted on FAT partitions because the file system requires an amount of physical disk space equal to the cluster size of the partition used to store a file, even if the file is smaller than the cluster size. For more information about cluster sizes of FAT and NTFS partitions, see Microsoft Knowledge Base article Q140365.
- If possible, install only one network protocol on the server. This practice frees up system resources and reduces network traffic. If multiple protocols are needed, set the bind order so that the most commonly used protocol is first.
- When working with Windows Terminal Services, increase the registry size to accommodate the additional user profile and application settings that are stored in the registry. On a single-processor server, reserve at least 40MB for the registry. Reserve at least 100MB on quad and eight-way servers.
- You can also increase performance by properly tuning the page file. For more information about the page file, see Microsoft Knowledge Base article Q197379.

Service Packs and Updates

For consistency and reduced troubleshooting, install service packs and hotfixes on all servers in the server farm.

Important: You can find late-breaking information and links to critical updates for server operating systems and for Citrix installation files in the online Preinstallation Update Bulletin. Open the Pre-installation Checklist to access a link to this site. You can find the checklist on the MetaFrame Presentation Server autorun screen.


Servers in a server farm use Microsoft Jet drivers extensively. The Microsoft Jet Database Engine is used by the local host cache on every server in the farm. It is also used when Resource Manager is installed. Citrix recommends installing Microsoft service packs for the Microsoft Jet Database Engine; older versions contain memory leaks that appear as IMA Service memory leaks. Apply these service packs and patches before installing MetaFrame Presentation Server on the servers. See Microsoft Knowledge Base article 273772 for more information.

Important: A memory leak in the Microsoft Jet Database Engine is fixed in Windows 2000 Service Pack 2. To use MetaFrame XP Feature Release 3 or earlier versions on a Windows 2000 system on which Service Pack 2 is not installed, you must install the hotfix described in Microsoft Knowledge Base article 273772, "FIX: Memory Leak in Jet ODBC Driver with SQL NUMERIC or SQL C BINARY Data." MetaFrame Presentation Server 3.0 requires Windows 2000 Service Pack 4.

Changing the Maximum Buffer Size

The amount of memory consumed by the Citrix IMA Service can be reduced by changing MaxBufferSize in a registry entry for the Microsoft Jet 4.0 database engine.

1. Run regedt32.
2. Locate the registry entry: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0
3. Double-click MaxBufferSize in the right pane.
4. In the DWORD Editor dialog box, enter 0x200 in the Data box. Accept the default radix, Hex, in the Radix box.
5. Click OK.

Caution: Observe precautions when editing the registry. See Microsoft documentation for more information about backing up and editing the registry.

Note: Installing a new Microsoft Data Access Components (MDAC) or Microsoft Jet Database Engine service pack may reset MaxBufferSize to its default setting. Be sure to check this setting after applying any MDAC or Jet updates.
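For scripted or repeated deployments, the same change can be applied by importing a registry file instead of editing by hand. This is a minimal sketch, not a procedure from this guide; the key path and value are taken directly from the steps above, and the caution about re-checking after MDAC or Jet updates still applies.

Windows Registry Editor Version 5.00

; Cap the Jet 4.0 engine buffer to reduce IMA Service memory use.
; 0x200 (512) is the value recommended in the steps above.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0]
"MaxBufferSize"=dword:00000200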


Planning User Capacity

The number of users that a server running MetaFrame Presentation Server can support depends on several factors, including:

- The server's hardware specifications
- The CPU and memory requirements of the applications that are being run
- The amount of user input being processed by the applications

The following sections present benchmark results regarding user capacity, examine the potential benefits of Hyper-Threading, and discuss how to achieve realistic sizing for the server by using Resource Manager.

Benchmarking Client Sessions

Citrix conducted benchmarking tests to quantify the number of simulated MetaFrame Presentation Server Client sessions that can be connected to a server while still providing acceptable performance. These tests simulated users continuously working with Microsoft Office applications, with users added in steps of five. During each test, the following steps were taken after each group of five users was logged on:

1. Launched simulated user scripts on all five sessions.
2. Opened Microsoft Excel and simulated the creation of a spreadsheet.
3. Opened Microsoft Access and simulated the creation of an Access database.
4. Opened Microsoft PowerPoint and created a presentation.

Based on how long the test steps took to complete, a benchmarking score was calculated. For these tests, a score of 80 was determined to be the optimal load for a server, meaning that the server had enough additional CPU and memory resources to handle spikes in performance.

Test Configuration

The benchmarking test was conducted with the following iteration structure and hardware and software configurations:

User Iteration
- Step size: five users
- Iterations: 36
- Total simulated users in this test: 180


Server Configuration
- Dell PowerEdge 2650
- Dual Intel Pentium 4 Xeon processors: 3.06GHz with 512KB L2 cache and 1MB L3 cache
- 533MHz front side bus
- Hyper-Threading enabled
- 36GB HDD with Dell PERC 3/Di RAID controller
- 4GB RAM
- 4GB page file
- Windows Server 2003
- MetaFrame Presentation Server 3.0, Enterprise Edition
- Microsoft Office XP (Excel XP, PowerPoint XP, and Access XP)

Client Configuration
- CPQ ML350
- 600MHz Pentium III with 256KB cache
- 256MB RAM
- Windows 2000 Service Pack 4
- Citrix 32-bit Program Neighborhood

Test Results

A server's degradation point was considered to have been reached when its score fell below 80. The results showed a maximum of 177 simulated users concurrently running Microsoft Office applications before this threshold was reached. In this test environment, increasing the number of concurrent users beyond this point would have resulted in decreased server and, therefore, user performance.


Hyper-Threading and User Capacity

Hyper-Threading is a technology developed by Intel and introduced in the Pentium 4 Xeon line of processors. It enables a single physical processor to appear as two logical processors to the operating system, allowing multi-threaded programs to take advantage of extra execution units on the processor and leading to performance increases for some applications, particularly those that are multi-threaded.

The following test used the benchmarking procedure described above and gauged the performance benefits of Hyper-Threading on a server running MetaFrame Presentation Server by comparing its degradation point to that of a server without Hyper-Threading. The dual-processor server configurations compared in this test had Hyper-Threading enabled or disabled in their system BIOS.

Test Configuration

The test was performed on Dell PowerEdge 2650 dual-processor systems with Hyper-Threading-capable Pentium 4 Xeon processors. The benchmarking test was conducted with the following iteration structure and hardware and software configurations:

User Iteration
- Step size: five users
- Iterations: 36
- Total simulated users in this test: 180

Server Configuration
- Dell PowerEdge 2650
- Dual Intel Pentium 4 Xeon processors: 3.06GHz with 512KB L2 cache and 1MB L3 cache
- 533MHz front side bus
- Hyper-Threading enabled and disabled
- 36GB HDD with Dell PERC 3/Di RAID controller
- 4GB RAM
- 4GB page file
- Windows Server 2003
- MetaFrame Presentation Server 3.0, Enterprise Edition
- Microsoft Office XP (Excel XP, PowerPoint XP, and Access XP)

Client Hardware Configuration
- CPQ ML350
- 600MHz Pentium III with 256KB cache
- 256MB RAM
- Windows 2000 Service Pack 4
- Citrix 32-bit Program Neighborhood


Test Results

The result of this test was that the server with Hyper-Threading enabled showed a 30% performance increase over the same server with Hyper-Threading disabled. In other words, the Hyper-Threaded server was able to support 30% more concurrent user sessions before reaching its degradation point.

Configuration              Number of Simulated Users    % Difference
Hyper-Threading Disabled   123 +/- 1.5                  NA
Hyper-Threading Enabled    177 +/- 1.5                  +30%

[Graph: benchmark score versus number of simulated users, showing the performance benefits of Hyper-Threading. The Hyper-Threaded server supported more user sessions before reaching its degradation point.]


Using Resource Manager to Determine User Capacity

Resource Manager's summary database stores information concerning CPU and memory usage for various processes running on MetaFrame Presentation Server. With this information, you can estimate the user capacity of a server:

1. Add a server to the farm, or create a new farm, and limit user access to approximately 20 users per processor.
2. Enable Resource Manager's summary database.
3. Run tests that include the launch and use of applications running on that server.
4. Create a Crystal report that queries the information required to assess server capacity. The report should contain the following information:
   - Average CPU and memory usage for the specific processes
   - Average CPU and memory usage for other processes, such as Explorer.exe or Winlogon.exe
   - A defined threshold, such as no more than 90% CPU usage and/or no more than 3GB of RAM used
   - A calculation to extrapolate the number of users that fit within the threshold given the resource usage above (see the sketch after this list)

The longer you run these tests, the better the data averages you can collect from the summary database.
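The extrapolation in the last report item can be as simple as dividing the headroom under each threshold by the per-user averages. The sketch below is one illustrative way to do it; the function name, the inputs, and the assumption that per-user resource usage scales roughly linearly are mine, not the guide's.

# Minimal sketch of the capacity extrapolation described above.
# The per-user averages and overheads are hypothetical numbers standing
# in for values pulled from the Resource Manager summary database.

def estimate_user_capacity(cpu_per_user_pct, mem_per_user_mb,
                           overhead_cpu_pct, overhead_mem_mb,
                           cpu_threshold_pct=90.0, mem_threshold_mb=3072.0):
    """Return the largest user count that keeps CPU and memory below
    the defined thresholds, assuming roughly linear per-user usage."""
    users_by_cpu = (cpu_threshold_pct - overhead_cpu_pct) / cpu_per_user_pct
    users_by_mem = (mem_threshold_mb - overhead_mem_mb) / mem_per_user_mb
    # The tighter of the two resources limits the server.
    return int(min(users_by_cpu, users_by_mem))

# Example: each session averages 0.9% CPU and 45MB RAM; system
# processes average 10% CPU and 400MB RAM.
print(estimate_user_capacity(0.9, 45, 10, 400))  # -> 59 (memory-bound)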


CHAPTER 3

Facilitating Server Farm Communication

This chapter examines the communication that occurs between servers in a MetaFrame Presentation Server farm. Among the topics discussed are the roles of zones and data collectors in a farm, the bandwidth requirements of various types of server communication, implementing a farm's data store, and multihoming servers in the farm.

Understanding Zones and Data Collector Communication

To understand server farm communication, you must be aware of how farms are divided into zones and the role that data collectors play in keeping track of changes in zones. Zones in a server farm perform two functions:

- Collecting data from member servers
- Distributing changes to all servers in the farm

All member servers must belong to a zone. By default, the zone name is the subnet ID on which the member server resides. The zone data collector maintains all load and session information for every server in its zone. Each data collector has a connection open to all other data collectors in the farm. This connection is used by a zone's data collector to immediately relay any changes reported by that zone's member servers to the data collectors of all other zones. Thus, all data collectors are aware of the licensing and session information for every server in the farm. The formula for the number of interzone connections is N * (N-1)/2, where N is the number of zones in the farm. For example, a farm with four zones maintains 4 * 3 / 2 = 6 data collector connections.


If a zone's data collector does not receive communication from a server in the zone within the configured time interval, the zone data collector pings the server to verify that it is online. The default interval is once a minute. A data collector will also ping any other data collector if it does not receive any data from that server within the configured time interval. This interval is configurable by adding the following value to the registry. The interval, in milliseconds, is expressed in hexadecimal notation:

HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\Runtime\KeepAliveInterval (DWORD)
Value: 0xEA60 (60,000 milliseconds default)

In normal operation, data collectors are synchronized through frequent updates. Occasionally, an update sent from one data collector to another can fail. Instead of repeatedly trying to contact a zone that is down or unreachable, a data collector waits a specified interval before retrying communication. The default wait interval is five minutes. That value is configurable by adding the following value to the registry. The interval, in milliseconds, is expressed in hexadecimal notation:

HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\Runtime\GatewayValidationInterval (DWORD)
Value: 0x493E0 (300,000 milliseconds)

Configuring Data Collectors in Large Zones

By default, a single zone supports 512 member servers (prior to MetaFrame XP Feature Release 3, the default was 256 member servers). If a zone has more than 512 member servers, each zone data collector and potential zone data collector must have a new registry setting. This setting controls how many open connections to member servers a data collector can have at one time. To prevent the data collector from constantly destroying and recreating connections to stay within the limit, set the registry value higher than the number of servers in the zone. You can configure this value by adding the following value, expressed in hexadecimal notation, to the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\Runtime\MaxHostAddressCacheEntries (DWORD)
Value: 0x200 (default 512 entries)

Note: If you do not have more than 512 servers in a zone, increasing this value will not increase the performance of the zone.
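The three zone-tuning values above could likewise be captured in a single importable registry file. The sketch below shows the documented default values for illustration only; adjust them to your zone before importing, and observe the usual registry-editing precautions.

Windows Registry Editor Version 5.00

; IMA zone-tuning values discussed above, shown at their defaults:
; KeepAliveInterval = 0xEA60 (60,000 ms)
; GatewayValidationInterval = 0x493E0 (300,000 ms)
; MaxHostAddressCacheEntries = 0x200 (512 entries)
[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\Runtime]
"KeepAliveInterval"=dword:0000EA60
"GatewayValidationInterval"=dword:000493E0
"MaxHostAddressCacheEntries"=dword:00000200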


Approximating Traffic Generated by Updates to the Data Collector

When a member server updates the data collector with requests and changed data, the amount of traffic this generates is represented by the following formulas. In turn, a small amount of traffic is then sent from the data collector back to the member server. This traffic accounts for approximately one half of the data sent from the member server to the data collector, so for full bandwidth utilization, multiply the number of bytes from the formula by 1.5. To approximate the total amount of traffic destined for the data collector, multiply the number of bytes from the formula by the number of member servers in the zone. These numbers are an approximation from data gathered by Citrix; actual results may vary.

MetaFrame Presentation Server 3.0: Bytes = 4900 + (200 * Con) + (100 * Discon) + (300 * Apps)
MetaFrame XPe / FR3: Bytes = 6300 + (200 * Con) + (100 * Discon) + (150 * Apps)
MetaFrame XPe / FR2: Bytes = 3800 + (600 * Con) + (400 * Discon) + (300 * Apps)
MetaFrame XPe / FR1: Bytes = 3300 + (400 * Con) + (250 * Discon) + (150 * Apps)
MetaFrame XPe: Bytes = 11000 + (1000 * Con) + (600 * Discon) + (350 * Apps)

Where:
Con = Number of connected sessions
Discon = Number of disconnected sessions
Apps = Number of published applications in the farm
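As a worked example, the short sketch below applies the MetaFrame Presentation Server 3.0 formula above; the session and application counts are hypothetical, and the multipliers follow the guidance in the preceding paragraph.

# Sketch of the MetaFrame Presentation Server 3.0 update formula above.
def update_bytes(con, discon, apps):
    """Bytes one member server sends to its zone data collector."""
    return 4900 + 200 * con + 100 * discon + 300 * apps

# Hypothetical zone: 40 member servers, each averaging 50 connected and
# 5 disconnected sessions, in a farm with 30 published applications.
per_server = update_bytes(con=50, discon=5, apps=30)  # 24,400 bytes
with_replies = per_server * 1.5                       # full bandwidth utilization
zone_total = per_server * 40                          # traffic toward the collector
print(per_server, with_replies, zone_total)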


Limiting Data Collector Communication

To maintain consistent information between zones, data collectors must relay all of their information to all of the other data collectors in the farm. Citrix recommends that you limit the number of zones to avoid the bandwidth costs associated with the replication of zone data. During a zone update, approximately the same amount of data is transmitted between data collectors, so for full bandwidth utilization, double the bytes from the following formulas. To approximate the amount of traffic across all data collector links, multiply the number of bytes obtained from the formula by the number of data collectors in the farm minus 1.

To approximate the amount of data sent between two data collectors during a full zone transfer, use the following formula:

MetaFrame Presentation Server 3.0: Bytes = 9530 + (300 * Con) + (300 * Discon) + (500 * Apps)
MetaFrame XPe / SP3: Bytes = (7400 + (6.3 * Srv_Zone)) + (400 * Con) + (200 * Discon) + (300 * Apps)
MetaFrame XPe / SP2: Bytes = 17000 + (600 * Con) + (300 * Discon) + (600 * Apps)

Where:
Con = Number of connected sessions
Discon = Number of disconnected sessions
Apps = Number of published applications in the farm
Srv_Zone = Number of servers in the zone

Note: You can use a third-party solution, such as Packeteer PacketShaper, to dedicate bandwidth for IMA traffic, which uses port 2512 by default, to avoid flooding the network in WAN environments. Packeteer PacketShaper 5.02 automatically supports IMA traffic. You can configure other third-party solutions to recognize IMA traffic by port number.
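To make the zone-transfer arithmetic concrete, here is a minimal sketch applying the MetaFrame Presentation Server 3.0 formula above, together with the doubling and per-link multipliers from the preceding paragraph; the farm figures are hypothetical.

# Sketch of the MetaFrame Presentation Server 3.0 zone-transfer formula above.
def zone_transfer_bytes(con, discon, apps):
    """Bytes sent between two data collectors during a full zone transfer."""
    return 9530 + 300 * con + 300 * discon + 500 * apps

# Hypothetical three-zone farm with 2000 connected sessions, 200
# disconnected sessions, and 30 published applications.
per_link = zone_transfer_bytes(con=2000, discon=200, apps=30)  # 684,530 bytes
both_directions = per_link * 2        # full bandwidth utilization
all_links = per_link * (3 - 1)        # number of data collectors minus 1
print(per_link, both_directions, all_links)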


Traffic for Session-Based Events

The following tables illustrate the impact to network traffic and the amount of data transmitted for session-based events. Each time these events occur, the member server sends data to the zone's data collector, which then sends data to all other data collectors in the farm.

MetaFrame Presentation Server 3.0
Event        Data transmitted (approximate)
Connect      0.51KB
Disconnect   0.48KB
Reconnect    0.47KB
Logoff       0.30KB

MetaFrame XPe / Feature Release 3
Event        Data transmitted (approximate)
Connect      0.86KB
Disconnect   0.68KB
Reconnect    0.70KB
Logoff       0.63KB

MetaFrame XPe / Feature Release 2
Event        Data transmitted (approximate)
Connect      1.20KB
Disconnect   1.3KB
Reconnect    1.2KB
Logoff       0.65KB


Traffic for Inter-Zone Communication

The following tables list the amount of data sent by one data collector to another when operations are performed in the Presentation Server Console on servers that reside in different zones.

MetaFrame Presentation Server 3.0
Event                                       Data transmitted (approximate)
Presentation Server Console server query    0.27KB
Application publishing                      2.7KB
Changing a zone data collector              12KB

MetaFrame XPe / Feature Release 3
Event                                       Data transmitted (approximate)
Presentation Server Console server query    0.53KB
Application publishing                      0.92KB
Changing a zone data collector              29KB

MetaFrame XPe / Feature Release 2
Event                                       Data transmitted (approximate)
Presentation Server Console server query    0.42KB
Application publishing                      0.75KB
Changing a zone data collector              25KB

The bandwidth consumed when you publish an application varies depending on the number of servers in the server farm. In general, the amount of bandwidth consumed increases by approximately 390 bytes for every additional server in the server farm. Starting a new server generates the most traffic to the other data collectors: about 8.9KB in a default configuration.


Bandwidth Requirements for Server Farm Events

The following sections discuss and quantify the bandwidth requirements for normal communication in a server farm. This information can be used to determine potential bandwidth requirements for WAN-based farms.

Note: Data in the following sections was gathered from tests performed using a Microsoft SQL Server 2000 data store. These results might not apply to all situations, and recommendations will vary based upon how much bandwidth is being used by other network applications.

Communication during Initial Farm Startup

The amount of data (in kilobytes) read from the data store during the initial startup of the server farm is approximated by the following formulas:

MetaFrame Presentation Server 3.0: KB = 431 + 3.15 * (Srvs - 1)
MetaFrame XPe / Feature Release 3: KB = 402 + 6.82 * (Srvs - 1)
MetaFrame XPe / Feature Release 2: KB = 456 + 22 * (Srvs - 1) + 0.0720 * Apps

Where:
Srvs = the number of servers in the farm
Apps = the number of published applications in the farm. In MetaFrame XP Feature Release 3 and later, the number of published applications no longer applies to the equation.

For example, under MetaFrame Presentation Server 3.0, a farm of 50 servers reads approximately 431 + 3.15 * 49, or about 585KB, during initial startup.

[Diagram: the initial startup sequence of a server farm.]


When you start a server, it must initialize the IMA Service and register with the data collector for the zone in which it resides. This communication occurs in the following sequence of events:

1. The IMA Service establishes a connection to the farm's data store and then downloads the information it needs to initialize. It also ensures that the data contained in its local host cache is current.
2. After the IMA Service is initialized, the member server registers with the data collector for the zone.
3. The data collector relays all of the updated information written by the member servers in the zone to all other data collectors in the farm, to keep them synchronized with each other. The collector-to-collector updates are a function of the amount of information that is updated by the member server. The data collectors replicate only the items that changed; they do not replicate all their tables every time an update is sent.

Note: In the preceding diagram, there are only two zones, so the data collector replicates the updates it receives from the member servers only once, to the other data collector. If, for example, there are three zones, the data collector has to replicate the same information twice. This causes higher bandwidth consumption and places a higher load on the data collectors in the farm.

The amount of data read from the data store can require higher bandwidth as the farm size increases and certain actions are executed, especially when several servers are started simultaneously. Most network traffic consists of reads from the database. Citrix recommends that the data store be replicated across all high-latency or low-bandwidth links. A replicated data store allows all reads to occur on the network local to the server, resulting in improved farm performance. If performance across the WAN is an issue, and having a replicated database at each site is cost prohibitive, analyze the WAN links for alternative solutions.

The IMA Service start time ranges from a few seconds to several minutes. When the amount of data requested from the data store by the IMA Service is greater than the size of the pipe between WAN segments, IMA waits for all of the data, resulting in a longer startup time.

Note: When the IMA Service takes a long time to restart, an error message can appear on the Presentation Server Console stating that the IMA Service could not be started, and the event log can contain a message stating that the IMA Service hung on starting. These errors are benign; the IMA Service does start properly after the requests to the data store are serviced.


Ordinary Farm Communication

[Figure: the communication that occurs within a farm once it is up and running.]

Every 30 minutes, a coherency check is performed between the member server's local host cache and the data store. If neither has changed, this operation consumes only approximately 200 bytes of bandwidth. If the check determines that something changed, the member server queries the data store to determine what changed and updates the information in the local host cache.

To ensure that the servers in its zone are functional, the data collector sends an IMAPing to each member server in its zone if it has not received an update from that server within the last 60 seconds. The data collector also asks a member server for its server load if it has not received a load update within the past five minutes. Finally, each data collector queries the other data collectors in the farm to ensure they are still data collectors and still operational, if it has not received an update from them in the last 60 seconds.


Event-Based Communication

Most traffic results from the generation of events, such as when a client connects, disconnects, or logs off. The member server sends updates about its load, license count, and so on to the data collector in its zone. The data collector in turn must replicate this information to all the other data collectors in the farm. The following sequence of events occurs when a user logs on:

1. The client device requests that the data collector determine the least-loaded servers in the farm.
2. The client connects to the least-loaded server returned by the data collector.
3. The member server updates its load, licensing, and connected session information on the data collector for its zone.
4. The data collector forwards this information to all the other data collectors in the farm.


New Data Collector Election

When a server in the farm is unable to communicate with the zone data collector, a process is initiated to elect a new data collector. The following example shows what occurs in a farm when a new data collector is elected:

1. The existing data collector for Zone 1 has an unplanned failure, such as a failing RAID controller that causes a fatal server error. (If the server is instead shut down appropriately, it triggers the election process before going down.)
2. The servers in the zone recognize that the data collector has gone down and start the election process. In this example, the backup data collector is elected as the new data collector for the zone.
3. The member servers in the zone send their information to the new data collector for the zone.
4. In turn, the new data collector replicates this information to all other data collectors in the farm.

Note: The data collector election process is not dependent on the data store. If the data collector goes down, sessions connected to other servers in the farm are unaffected.


Local Host Cache Change Events

When configuration changes are made in the Presentation Server Console, the changes are propagated across the farm using notification broadcasts. These broadcasts take place when a change is made that is under 10KB in size, which is usually the case. These broadcasts help minimize WAN traffic and alleviate contention on the data store. If a server misses a change notification, it picks up the change the next time it performs a local host cache coherency check. The following sequence shows an example of local host cache change event communication:

1. The administrator makes a change in the Presentation Server Console affecting all the servers in the farm.
2. The server that the console is connected to updates its local host cache and writes the change to the data store.
3. The server forwards the change to the data collector for the zone in which it resides. The data collector updates its local host cache.
4. The data collector forwards the change to all the member servers in its zone and to all other data collectors in the farm. All servers update their local host caches with the change.
5. The data collectors in the other zones forward the update to all the member servers in their zones, and those servers subsequently update their local host caches.


Presentation Server Console CommunicationWhen the Presentation Server Console is launched, it gathers information from several different sources. It pulls static information such as the server list from the data store, dynamic data session information from the data collector, and Resource Manager-specific information from the farm metric server. The following table illustrates consumption of bandwidth to the data store when the following actions are performed using the console:

MetaFrame Presentation Server 3.0Action Server enumeration (one server) Server details (one server) Server query Application enumeration (one application) Application query Add Resource Manager metric to application Add Resource Manager metric to application and configure Change farm metric server Any Resource Manager report on the local server Data transmitted (in KB) 0 169.88 26.10 11.97 145.06 11.33 21.95 15.88 6.58

MetaFrame XPe / Feature Release 3

Action                                                      Data transmitted (in KB)
Server enumeration (one server)                             0.66
Server details (one server)                                 14.31
Server query                                                7.13
Application enumeration (one application)                   0.71
Application query                                           15.57
Add Resource Manager metric to application                  3.69
Add Resource Manager metric to application and configure    6.76
Change farm metric server                                   5.29
Any Resource Manager report on the local server             1.56

MetaFrame XPe / Feature Release 2

Action                                                      Data transmitted (in KB)
Server enumeration (one server)                             13
Server details (one server)                                 20
Server query                                                193
Application enumeration (one application)                   6
Application query                                           4
Add Resource Manager metric to application                  11
Add Resource Manager metric to application and configure    21
Change farm metric server                                   45
Any Resource Manager report on the local server             5.13

Tip When using the Presentation Server Console to monitor a farm at a remote site, you can conserve bandwidth across the WAN by publishing the console application on a server at the remote site and connecting to it locally with the client.


Implementing the Data Store

The following sections discuss network configurations that affect the data store, setting up the data store in a Storage Area Network (SAN), and some special data store scenarios.

Data Store Network Optimizations

In large farms with powerful database servers, the network can become a performance bottleneck when reading information from the data store. This is particularly true when the database server hosts various resource-intensive databases. To find out if the network is the bottleneck, monitor CPU usage on the data store server. If CPU utilization is not at 100% while the IMA Service is still in the process of starting, the network may be the bottleneck. If CPU utilization is at or near 100%, it is likely that additional processors are needed instead.
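One way to watch the database server's processor while the IMA Service starts is the TYPEPERF counter tool (included with Windows Server 2003; on earlier versions, the same counter can be watched in Performance Monitor). A minimal sketch:

REM Sample total CPU utilization every 5 seconds; press Ctrl+C to stop
typeperf "\Processor(_Total)\% Processor Time" -si 5

If the counter stays well below 100% while IMA startup remains slow, suspect the network path to the data store rather than the processor.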

Teaming Network Interface Card Configurations

To avoid a potential network bottleneck, Citrix recommends that you use a teaming Network Interface Card (NIC) solution to improve the bandwidth to the data store. NICs and switch ports should always be manually configured to support full duplex and the highest speed available on both devices, because automatic sensing of switch settings by the NICs does not always result in an optimal or compatible configuration. If the speed or duplex settings are configured incorrectly, frames are likely to be dropped.

Many servers come with two installed NIC ports. These NICs can be configured as follows, listed in order of Citrix's recommendation:

- Utilize both NICs and team using switch-assisted load balancing within the same subnet if connecting to different blades within a large Layer 3 switch
- Utilize both NICs and team using adaptive load balancing within the same subnet if connecting to different blades within a large Layer 3 switch
- Utilize both NICs and configure for failover onto two separate switches
- Utilize one NIC and disable the second
- Utilize both NICs and multihome to two different subnets

If two NICs and switch ports are available, they can be teamed, configured for failover, or multihomed. Of these options, Citrix recommends NIC teaming when the switch ports are located on different blades within a large Layer 3 switch (for example, the Cisco 6500 series), because NIC teaming enables both failover and redundancy in addition to higher throughput.


Although the Layer 3 switch does represent a single point of failure in this case, most large Layer 3 switches have an extremely low failure rate; more commonly, an individual blade fails. If a large Layer 3 switch that supports teaming across blades is not available, a failover configuration is the best option.

While multihoming is a supported practice, NIC teaming is considered the better option in nearly all situations. Multihoming is often configured incorrectly, and it can open security holes because access control lists configured on the router are bypassed.

If it is impossible to team the NICs and switch ports of all of the servers in the farm, Citrix recommends that you apply this recommendation to the following servers at a minimum:

- Data store server(s)
- Web Interface server(s)
- Secure Gateway server(s)
- Secure Ticket Authority server(s)
- Secure Access Manager server(s)
- Zone data collector(s)

Note Citrix recommends teaming NICs using the MAC address, not the IP address, because the MAC address is not subject to modification unless the burned-in address (BIA) is modified; it is a more basic and stable configuration. Follow the switch vendor's recommended practice for manually configuring teaming or aggregation of the switch ports.

Network Fault Tolerance

Network fault tolerance functions as a failover option and provides the safety of an additional backup link between the server and the switch. If the primary adapter fails, the secondary adapter takes over with very minor interruption in server operations. When tested by Citrix, failover caused an interruption of less than 0.5 seconds and had no noticeable impact on existing client sessions.


Transmit Load Balancing

This option, formerly known as adaptive load balancing, creates a team of adapters to increase transmission throughput and ensure that all network users experience similar response times. All adapters must be linked to the same network switch. As adapters are added to the server, they are grouped into teams to provide a single virtual adapter with increased transmission bandwidth. For example, a transmit load balancing team containing two Fast Ethernet adapters configured for full-duplex operation provides an aggregate maximum transmit rate of 200Mbps and a 100Mbps receive rate, resulting in a total bandwidth of 300Mbps. One adapter is configured for transmit and receive, while the others are configured for transmit only. Adapter teams configured for transmit load balancing also provide network fault tolerance: if the primary adapter that supports both transmit and receive fails, another adapter takes over this functionality.

Switch-Assisted Load Balancing

Citrix tested data store connectivity on a 100Mbps switched LAN and repeated the testing in a Gigabit Ethernet environment. Two NICs teamed using switch-assisted load balancing (400Mbps total throughput) provided ample throughput without the additional cost associated with Gigabit NICs, cables, and switch ports. However, in very large environments, Gigabit connectivity may be beneficial.

Unlike transmit load balancing, you can configure switch-assisted load balancing, also known as Fast EtherChannel (FEC), to increase both the transmit and receive channels between the server and switch. For example, an FEC team containing two Fast Ethernet adapters configured for full-duplex operation provides an aggregate maximum transmit rate of 200Mbps and an aggregate maximum receive rate of 200Mbps, resulting in a total bandwidth of 400Mbps. All adapters are configured for transmit and receive, with the load spread roughly equally. FEC works only with FEC-enabled switches. The FEC software continuously analyzes the load on each adapter and balances network traffic across the adapters as needed. Adapter teams configured for FEC provide additional throughput and redundancy as well as the benefits of network fault tolerance.

For more information, see Citrix Knowledge Base article CTX434260 and/or contact your hardware vendor.


Implementing the Data Store in a Storage Area Network

A Storage Area Network (SAN) is a dedicated high-speed network that is separate and distinct from the Local Area Network (LAN). A SAN provides shared storage through an external disk storage pool. The SAN is a back-end network that carries only I/O traffic between servers and a disk storage pool, while the front-end network, the LAN, carries email, file, print, and Web traffic. Implementing your server farm's data store in a SAN can provide increased reliability and improved performance.

Fibre Channel Technology

The most commonly used SCSI technology for SAN implementations is Fibre Channel, the standard for high-performance, serial, bidirectional interconnections. Fibre Channel has the following capabilities:

- Bidirectional data transfer rates up to 200Mbps
- Support for up to 126 devices on a single host adapter
- Communications up to 20km (approximately 12 miles)

Fibre Channel implementations can use either of the following networking technologies:

- Fibre Channel Arbitrated Loop (FC-AL). FC-AL networks use shared media technology similar to Fibre Distributed Data Interface (FDDI) or Token Ring. Each network node has one or more ports that allow external communication; FC-AL creates logical point-to-point connections between ports.
- Fibre Channel Fabric (FC-SW). Fabric networks use switched network technology similar to switched Ethernet. A fabric switch divides messages into packets containing data and a destination address, and then transmits the packets individually to the receiving node, which reassembles the message. Fabric switches can cascade, allowing a SAN to support thousands of nodes.


Hardware Components

Storage Area Networks typically include the following hardware components:

- Host I/O bus. The current I/O bus standard is Peripheral Component Interface (PCI). Older standards include Industry Standard Architecture (ISA) and Extended Industry Standard Architecture (EISA).
- Host bus adapter. The host bus adapter (HBA) is the interface from the server to the host I/O bus. The HBA is similar in function to a Network Interface Card (NIC), but is more complex. HBA functions include the following:
  - Converting signals passed between the LAN and the SAN's serial SCSI
  - Initializing the server onto an FC-AL network or providing a Fabric network logon
  - Scanning the FC-AL or Fabric network and attempting to initialize all connected devices, in the same way that parallel SCSI scans for logical devices at system startup
- Cabling. Fibre Channel cables include lines for transmitting and for receiving. Because of the shape of the connector, you cannot install them incorrectly.
- SAN networking equipment. There are many similarities between a SAN and other networks such as a LAN. The basic network components are the same: hubs, switches, bridges, and routers.
- Storage devices and subsystems. A storage subsystem is a collection of devices that share a power distribution, packaging, or management system, such as tape libraries or RAID disk drives.
- Tape backup. SANs provide easy, on-the-fly tape backup strategies. Tape backups are much quicker and consume fewer resources because all of the disk access occurs on the SAN's fibre network, not on the LAN. This allows the data store to be backed up easily even while it is in use.


Cluster Failover Support

The data store is an integral part of the MetaFrame Presentation Server architecture. In large enterprise environments, it is important to have the database available all the time. For maximum availability, the data store should be in a clustered database environment with a SAN backbone. Hardware redundancy allows the SAN to recover from most component failures. Additional software, such as Oracle 9i Real Application Cluster or SQL Server 2000 using Microsoft Clustering Services (MSCS), allows for failover after a catastrophic software failure and, in Oracle's case, improves performance.

Note Software such as Compaq's SANWorks may be required to manage database clusters in certain hardware configurations.

Microsoft Clustering Services (MSCS), available as part of the Windows 2000 Advanced Server and Datacenter Server products, provides the ability to fail over the data store to a functioning server in the event of a catastrophic server failure. MSCS monitors the health of standard applications and services, and automatically recovers mission-critical data and applications from many common types of failures. A graphical management console allows you to monitor the status of all resources in the cluster and to manage workloads accordingly. In addition, Windows 2000 Advanced Server and Datacenter Server integrate middleware and load balancing services that distribute network traffic evenly across the clustered servers.

Redundancy and recovery can be built into each major component of the data store. Deploying the following technologies can eliminate single points of failure from the data store:

- Microsoft Clustering Services
- Redundant hardware
- Software monitoring and management tools

The basic SAN configuration shown below illustrates each clustered server with dual HBAs cabled to separate FC-AL switches. A system with this redundancy can continue running when any component in this configuration fails.


Redundant SAN configuration

SAN architecture is very reliable and provides redundant systems in all aspects of the configuration, with multiple paths to the network. Windows 2000 Advanced Server allows two nodes to be clustered; Windows 2000 Datacenter Server allows four. If there is a software or hardware failure on the node that owns the cluster resources, the servers in the farm lose their connection to the database. When the servers sense that the connection was dropped, the farm goes into a two-minute wait period, and the servers then attempt to reconnect to the database. If servers in the farm cannot immediately reconnect to the data store, they retry indefinitely, every two minutes. The servers reconnect automatically once the database, which keeps the same IP address, fails over to the other node of the cluster.

SQL

SQL clustering does not mean that both databases are active and load balanced. With SQL clustering, the only supported method allows one server to handle all requests while the other simply stands by, waiting for the active machine to fail.

Note Citrix recommends that you use Windows NT authentication for connecting to the database when installing MetaFrame Presentation Server against a clustered SQL Server.
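With Windows NT authentication, the file DSN used for the data store connection carries no SQL credentials; the connection runs under the account configured for the IMA Service. The following is a hypothetical mf20.dsn sketch; the server and database names are placeholders, not values from this guide:

[ODBC]
DRIVER=SQL Server
SERVER=SQLCLUSTER
DATABASE=MFDataStore
Trusted_Connection=Yes

Pointing SERVER at the cluster's virtual network name (SQLCLUSTER here) keeps the DSN valid after a failover.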


Oracle

Oracle Real Application Cluster (RAC) does allow true active-active clustering. As database requests are sent using ODBC, they are load balanced among the nodes of the cluster. This configuration provides both fault tolerance and increased performance.

SAN Tuning

In addition to increased reliability, you can tune the SAN to provide better database performance. When tested by Citrix, the data store was used mainly as a repository for reading configuration information; the number of reads far exceeded the number of writes. For optimal access to the data store through the SAN, you can tune the array controller on the SAN for 100% reads and 0% writes.

Note Tuning the SAN for 100% reads and 0% writes still allows servers to write to the data store.


Multihoming Servers in the Farm

MetaFrame Presentation Server includes support for multihomed servers. This section explains how to implement MetaFrame Presentation Server on a server operating with two or more network interface cards (NICs).

You can run MetaFrame Presentation Server on multihomed servers to provide access to two network segments that have no direct route to each other. Because each separate network uses the same MetaFrame resources, the networks can access the same server farm. Running MetaFrame Presentation Server on multihomed servers also allows you to separate server-to-server communication from client-to-server communication. This scenario is illustrated in the figure below and is the subject of the examples in this section.

Multihomed MetaFrame Presentation Server Farm

Note Citrix recommends that you do not configure multihomed servers running MetaFrame Presentation Server to operate as routers (TCP/IP forwarding).
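To verify that TCP/IP forwarding is disabled, you can check the IPEnableRouter registry value, shown here with the REG.EXE utility (built into Windows Server 2003; available for Windows 2000 in the Support Tools). A value of 0 means forwarding is off:

reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter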


Configuring the Routing Table

To successfully run MetaFrame Presentation Server on multihomed servers, you may need to manually configure the local routing tables. When Windows automatically builds the server's routing tables, the resulting network card binding order and default gateway configuration may not meet your needs. For information about changing the default gateway, see "Configuring a Default Gateway" later in this chapter.

When clients request a server name or published application, the server that receives the request returns the TCP/IP address of the appropriate server. The following requests from clients require address resolution:

- Finding the address of the data collector
- Finding the TCP/IP address of a given server name
- Finding the TCP/IP address of the least loaded server for a published application

When a server receives an address resolution request from a client, the server compares the TCP/IP address of the client to its local routing table to determine which network interface to return to the client. If the routing table is not configured correctly, the client's request cannot be filled.

The figure above illustrates two multihomed servers, each with a connection to the 10.8.1.0/24 and 172.16.1.0/24 subnets. Neither server is configured to route between the two network interfaces. The process described below occurs when a client requests a response from a computer running MetaFrame Presentation Server:

1. The client with TCP/IP address 10.8.2.20 (ICA01) sends an address resolution request to the server named MFSRV01.
2. MFSRV01 has the TCP/IP address 10.8.1.3. This server also has a second NIC with TCP/IP address 172.16.1.3.
3. ICA01 is configured with MFSRV01 as its server location. ICA01 contacts MFSRV01 and requests a load-balanced application.
4. The TCP/IP address of the least loaded server hosting the requested published application must be supplied to ICA01. MFSRV01 determines that MFSRV02 is the least loaded server.
5. MFSRV02 has two TCP/IP addresses, 10.8.1.4 and 172.16.1.4.


6. MFSRV02 determines the source address of ICA01. The server uses its local routing table to determine which network interface should be returned to the client. In this case, the NIC that can reach the client's 10.8.2.0/24 network is returned. If there is no explicit entry for that network in the local routing table, the default route, configured automatically by Windows, is used.
7. MFSRV01 uses its local routing table to correctly respond with the 10.8.1.4 address when directing the client to MFSRV02.

To set up a routing table on a multihomed server running MetaFrame Presentation Server, first configure a single default gateway and then add static routes.

Configuring a Default Gateway

Although Windows servers build multiple default gateways, the network binding order of the NICs in the server determines which default gateway is used. Using the example illustrated in the figure above, we selected the 10.8.1.1 address as our default gateway; to make this take effect, the network card operating on the 10.8.1.0/24 network must be moved to the first position in the network binding order.

In certain environments, configuring the network binding order alone is not sufficient for the server to function properly. For example, suppose a server has two connections to the Internet, where each connection provides ICA connectivity for a diverse range of IP subnets. If the server receives a request from a client on its second NIC (Network 2), which does not hold the default gateway, and there is no entry for the client's network in the server's local routing table, the response is sent through the default gateway on the first NIC (Network 1), which causes the client's request to fail.

Alternatively, you can remove the additional default gateway configurations from each NIC on the server through the server's TCP/IP configuration. Using servers MFSRV01 and MFSRV02 from our example, we selected 10.8.1.1 as the default gateway for both servers and removed the default gateway setting from the NICs operating on the 172.16.1.0/24 network.
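One way to assign a static address with no default gateway, instead of editing the TCP/IP properties dialog, is the NETSH utility (a sketch using the example's second adapter; adjust the connection name to match your server):

netsh interface ip set address name="Local Area Connection #2" source=static addr=172.16.1.3 mask=255.255.255.0 gateway=none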


Running the command-line utility IPCONFIG on MFSRV01 returns the following:

Windows IP Configuration

Ethernet adapter Local Area Connection #1:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 10.8.1.3
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.8.1.1

Ethernet adapter Local Area Connection #2:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 172.16.1.3
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

Running IPCONFIG on MFSRV02 returns the following:

Windows IP Configuration

Ethernet adapter Local Area Connection #1:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 10.8.1.4
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.8.1.1

Ethernet adapter Local Area Connection #2:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 172.16.1.4
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :


Adding Static Routes

You can define static, persistent routes to avoid potential routing conflicts. Depending on your network configuration, adding static routes may be the only way to provide ICA connectivity to a multihomed server. The data displayed below uses the example illustrated in the preceding figure.

Executing the ROUTE PRINT command from a command prompt on MFSRV01 returns the following routing table:

==========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x2 ...00 a0 c9 2b f8 dc ...... Intel 8255x-based Integrated Fast Ethernet
0x3 ...00 c0 0d 01 12 f5 ...... Intel(R) PRO Adapter
==========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0         10.8.1.1        10.8.1.3       1
         10.8.1.0    255.255.255.0         10.8.1.3        10.8.1.3       1
         10.8.1.3  255.255.255.255        127.0.0.1       127.0.0.1       1
   10.255.255.255  255.255.255.255         10.8.1.3        10.8.1.3       1
        127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
       172.16.1.0    255.255.255.0       172.16.1.3      172.16.1.3       1
       172.16.1.3  255.255.255.255        127.0.0.1       127.0.0.1       1
     172.16.1.255  255.255.255.255       172.16.1.3      172.16.1.3       1
        224.0.0.0        224.0.0.0         10.8.1.3        10.8.1.3       1
        224.0.0.0        224.0.0.0       172.16.1.3      172.16.1.3       1
  255.255.255.255  255.255.255.255         10.8.1.3        10.8.1.3       1
Default Gateway:          10.8.1.1
==========================================================================
Persistent Routes:
  None

MFSRV01 is currently configured with a default gateway using the router at 10.8.1.1. Note that the second client, ICA02, is located on the 192.168.1.0/24 network, which is accessed through the router at 172.16.1.1. For MFSRV01 to have network connectivity to ICA02, and to avoid using the default gateway when responding to its requests, define a static route for the 192.168.1.0/24 network:

ROUTE -p ADD 192.168.1.0 MASK 255.255.255.0 172.16.1.1


Executing ROUTE PRINT on MFSRV01 now returns:

===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x2 ...00 a0 c9 2b f8 dc ...... Intel 8255x-based Integrated Fast Ethernet
0x3 ...00 c0 0d 01 12 f5 ...... Intel(R) PRO Adapter
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0         10.8.1.1        10.8.1.3       1
         10.8.1.0    255.255.255.0         10.8.1.3        10.8.1.3       1
         10.8.1.3  255.255.255.255        127.0.0.1       127.0.0.1       1
   10.255.255.255  255.255.255.255         10.8.1.3        10.8.1.3       1
        127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
       172.16.1.0    255.255.255.0       172.16.1.3      172.16.1.3       1
       172.16.1.3  255.255.255.255        127.0.0.1       127.0.0.1       1
     172.16.1.255  255.255.255.255       172.16.1.3      172.16.1.3       1
      192.168.1.0    255.255.255.0       172.16.1.1      172.16.1.3       1
        224.0.0.0        224.0.0.0         10.8.1.3        10.8.1.3       1
        224.0.0.0        224.0.0.0       172.16.1.3      172.16.1.3       1
  255.255.255.255  255.255.255.255         10.8.1.3        10.8.1.3       1
Default Gateway:          10.8.1.1
===========================================================================
Persistent Routes:
  Network Address          Netmask  Gateway Address  Metric
      192.168.1.0    255.255.255.0       172.16.1.1       1

Configure MFSRV02 the same way. When the static routes are set up, both clients can ping the TCP/IP addresses of both servers and the servers can ping the clients. Each server can now correctly resolve the network interface to which either client is connecting. The TCP/IP addresses that the ICA01 client can receive are 10.8.1.3 and 10.8.1.4. The TCP/IP addresses that the ICA02 client can receive are 172.16.1.3 and 172.16.1.4.
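Because MFSRV02 reaches the 192.168.1.0/24 network through the same router at 172.16.1.1, the static route added on MFSRV02 is identical:

ROUTE -p ADD 192.168.1.0 MASK 255.255.255.0 172.16.1.1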


Using DNS Servers for Client to Server Connections

When a client connects to a server or application, the client can use several methods and protocols to enumerate the server name and address where the connection is destined. This section describes how the client interacts with Domain Name System (DNS) servers and which protocols take full advantage of this implementation. In this section, the example domains used are corp.company.com and remote.company.com. The diagram below is referenced throughout this section to provide examples of usage:

A DNS interaction example

TCP/IP+HTTP and TCP/IP+SSL Connections

When using Program Neighborhood with TCP/IP sessions, the client enumerates the server through DNS first. If the server location contains the NetBIOS name of the server (for example, MetaFrame1), the client queries DNS and WINS for resolution. If the server location is set to the Fully Qualified Domain Name (FQDN), that is, MetaFrame1.corp.company.com, the client queries only DNS. Using FQDNs for resolution is very useful for Internet-ready connections through public networks. A significant performance gain and network traffic reduction in enumeration and connection exists when only DNS is used. WINS was not used in the tests for this section.

Note WINS lookups are performed only if the client device has WINS addresses set in the network device's TCP/IP properties.
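To confirm that an FQDN entered in the server location resolves through DNS alone, you can query it directly with NSLOOKUP, using the example server name from this section:

nslookup MetaFrame1.corp.company.com

If the name resolves here but the client still falls back to WINS, check whether the server location entry contains the NetBIOS name rather than the FQDN.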


The server location settings (Custom Connection Settings/ICA Settings) can hold up to 15 server addresses: five addresses in the Primary group, five in the Backup1 group, and five in the Backup2 group. The client resolves all addresses in the groups, even if the first address responds to the request; this resolution is done automatically by the operating system rather than by the client. If DNS round robin is not being used, Citrix recommends that all of the servers' FQDN addresses be entered in the server location settings. This ensures proper communication to the farm if one of the servers becomes unavailable.

By default, the Web Interface and the Program Neighborhood Agent use TCP/IP as their default ICA connection protocol. To verify that you are using the FQDNs of the servers, follow the steps below:

1. Connect to http:///Citrix/MetaFrame/WIAdmin.
2. Click MetaFrame Servers in the Content area.
3. Verify that the FQDNs are set for all applicable servers. If not, remove the names and add the FQDNs.
4. Click Save.
5. Click Apply Changes for the settings to take effect.

These steps ensure that when using the Web Interface and the Program Neighborhood Agent, the generated ICA file has the FQDN address of the server instead of a localhost resolution (if the Web Interface is installed on one of the servers) or a NetBIOS name resolution.

To have the Web Interface generate an ICA file that connects to the server using the FQDN, perform the following:

1. Open the Presentation Server Console.
2. Right-click the farm and go to Properties.
3. Go to MetaFrame Settings and select the option Enable XML Service DNS address resolution.
4. Click OK to save the settings.

This enables the Web Interface and Program Neighborhood Agent to generate the FQDN in the address field of the ICA file, ensuring proper resolution of the server even in dynamic environments. If the server's IP address changes and DNS supports dynamic updates, there is no need to change the ICA and Web Interface configurations to support the change.


Program Neighborhood Agent and DNS

On a default Web Interface installation, the Program Neighborhood Agent's Config.xml file is populated with the NetBIOS name of the Web Interface server. Citrix recommends that you edit this file to change the NetBIOS name to the FQDN. This facilitates communications throughout DNS and HTTP(S) connections.

For Web Interface 2.0 and newer products:

1. Open a Web browser and go to http:///Citrix/PNAgentAdmin.
2. Go to Server Settings.
3. Verify that the Server URL contains the FQDN of the Web Interface server. If it does not, replace the NetBIOS name with the FQDN.
4. Click Save to save the changes. This action replaces all values listed in the previous step.

DNS Round Robin

DNS round robin is used to distribute TCP/IP connections across several servers, with enumeration requests distributed equally across a group of servers. This is beneficial for administrators who have more than 15 servers and would like to use all of the servers at enumeration time. In most cases, the client has to perform an enumeration of the servers in the farm to balance the server loads.

To use DNS round robin, create the following DNS host (A) records in DNS1 (from the network diagram above):

enum.corp.company.com     IN A 192.168.1.20
enum.corp.company.com     IN A 192.168.1.21

and the following A records in DNS2:

enum.remote.company.com   IN A 192.168.2.20
enum.remote.company.com   IN A 192.168.2.21

When the records are created, you can add enum.corp.company.com and enum.remote.company.com as the Server Location for Client1. For Client2, reverse the order of the enumerator addresses (first enum.remote.company.com, then enum.corp.company.com) to ensure that the client enumerates the corp domain only when necessary. The same applies to Client1 enumerating the remote domain only when necessary.
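If the DNS servers run the Microsoft DNS Service, one way to script the record creation is the DNSCMD utility from the Windows Support Tools (a sketch, assuming DNS1 hosts the corp.company.com zone):

REM Add two A records for the same host name to enable round robin
dnscmd DNS1 /RecordAdd corp.company.com enum A 192.168.1.20
dnscmd DNS1 /RecordAdd corp.company.com enum A 192.168.1.21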


When the client attempts to connect, each response from the DNS server includes the IP addresses of all of the farm's servers for the queried name. If Client1 connects to a load-balanced application, the DNS server returns the IP addresses for MetaFrame1 and MetaFrame2 in the first response, for enum.corp.company.com, and the IP addresses for MetaFrame3 and MetaFrame4 in the second response, for enum.remote.company.com. When round robin is enabled on the DNS server, it restructures the list for each client resolution request that it receives by moving the first host to the end of the list. This ensures that all the farm's servers in the round robin loop alternate taking client enumeration requests. In some cases, clients will