
PeopleSoft Bandwidth Analysis

Citrix Consulting

Citrix Systems, Inc.


Notice

The information in this publication is subject to change without notice.

THIS PUBLICATION IS PROVIDED “AS IS” WITHOUT WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. CITRIX SYSTEMS, INC. (“CITRIX”), SHALL NOT BE LIABLE FOR TECHNICAL OR EDITORIAL ERRORS OR OMISSIONS CONTAINED HEREIN, NOR FOR DIRECT, INCIDENTAL, CONSEQUENTIAL OR ANY OTHER DAMAGES RESULTING FROM THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION, EVEN IF CITRIX HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES IN ADVANCE.

This publication contains information protected by copyright. Except for internal distribution, no part of this publication may be photocopied or reproduced in any form without prior written consent from Citrix.

The exclusive warranty for Citrix products, if any, is stated in the product documentation accompanying such products. Citrix does not warrant products other than its own.

Product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

Copyright © 2003 Citrix Systems, Inc., 851 West Cypress Creek Road, Ft. Lauderdale, Florida 33309-2009 U.S.A. All rights reserved.

Version History

March 21, 2003 Jo Harder Version 1.0

April 14, 2003 Jo Harder Version 1.1


Table of Contents

INTRODUCTION
    KEY FINDINGS
    DOCUMENT OVERVIEW

EXECUTIVE SUMMARY
    OBJECTIVE
    APPROACH
    TEST SUMMARY
    RESULTS AND COMMENTS
        Concurrent Network Sessions
        Comments

TESTING METHODS
    SCRIPTED TEST
    REAL USER TEST
    REAL USERS WITH TASK LIST
    COMBINATION
    SCALABILITY TEST METHODS SUMMARY
    PEOPLESOFT SCALABILITY TESTING METHOD

BANDWIDTH TESTING PROCESS
    PLANNING THE TEST
    CREATING THE SCENARIO
    RUNNING THE SCENARIO
    MONITORING THE SCENARIO
    ANALYZING TEST RESULTS
    BASELINING

CUSTOM PEOPLESOFT USER ACTIVITIES
    PEOPLESOFT USER SCRIPT
        PeopleSoft Script Narrative
        Graphic Depiction

ENVIRONMENT ARCHITECTURE OVERVIEW
    LAB ENVIRONMENT
    WAN LINK
    CLIENT DEVICES
        ICA Client
        Internet Explorer 6.0
        iOpus Internet Macros
    SERVER HARDWARE AND SOFTWARE
        Citrix MetaFrame XP Presentation Server
        ICA Connection Configuration
        Internet Explorer 6.0 Configuration
        PeopleSoft
        eEye Iris
        Hardware and Operating System Specifications

TEST CASES
    CASE 1: SINGLE DIRECT USER OVER 28 KBPS LINK
    CASE 2: SINGLE ICA SESSION OVER 28 KBPS LINK
    CASE 3: SINGLE DIRECT USER OVER 1.544 MBPS LINK
    CASE 4: SINGLE ICA SESSION OVER 1.544 MBPS LINK
    CASE 5: THREE DIRECT USERS OVER 64 KBPS LINK
    CASE 6: THREE ICA SESSIONS OVER 64 KBPS LINK

RESULTS AND ANALYSIS
    RESULTS
        Case 1 vs. Case 2
        Case 3 vs. Case 4
        Case 5 vs. Case 6
        Overall Data
    ANALYSIS

APPENDIX A: TEST RESULTS
    PING RESULTS (LATENCY AND RTT)
        CASE 1: 1DIR28
            1DIR28 Before test: 172.16.10.221->172.16.30.11
            1DIR28 During test: 172.16.10.221->172.16.30.11
        CASE 2: 1ICA28
            1ICA28 Before test: 172.16.10.189->172.16.30.11
            1ICA28 During test: 172.16.10.189->172.16.30.11
        CASE 3: 1DIR1544
            1DIR1544 Before test: 172.16.10.221->172.16.30.11
            1DIR1544 During test: 172.16.10.221->172.16.30.11
        CASE 4: 1ICA1544
            1ICA1544 Before test: 172.16.10.189->172.16.30.11
            1ICA1544 During test: 172.16.10.189->172.16.30.11
        CASE 5: 3DIR64
            3DIR64 Before test: 172.16.10.221->172.16.30.13
            3DIR64 During test: 172.16.10.221->172.16.30.13
        CASE 6: 3ICA64
            3ICA64 Before test: 172.16.10.189->172.16.30.13
            3ICA64 During test: 172.16.10.189->172.16.30.13
    NETWORK BANDWIDTH TESTING
        1: 1DIR28
        2: 1ICA28
        3: 1DIR1544
        4: 1ICA1544
        5: 3DIR64
        6: 3ICA64

Introduction PeopleSoft is an Enterprise Resource Planning (ERP) system that enables businesses and their customers, suppliers, and employees to work together collaboratively. A PeopleSoft implementation is based on one or more customized modules that allow companies to focus on productivity and profitability. Most PeopleSoft implementations consist of several modules, such as HR and Financials.

PeopleTools is the PeopleSoft runtime architecture and integrated development environment. It is an Internet-based solution, with the users accessing the front-end via a standard web browser.

This document focuses on the bandwidth requirements associated with a standard browser implementation as compared with deploying PeopleSoft as a published application via MetaFrame XP Presentation Server with Feature Release 2.

Key Findings Because web-based applications can consume considerable bandwidth and resources, MetaFrame XP is a viable alternative that can minimize network resource consumption, particularly when WAN or LAN bandwidth is limited. All testing was done in the Citrix Consulting lab environment using PeopleSoft Financials 8.4. Compared with direct browser access, the MetaFrame XP deployment showed the following:

• Approximately 27% less network traffic was generated

• Average network latency decreased by over 30%

• ICA packets averaged less than 25% of the size of HTTP packets (roughly 75% smaller in bytes). Because each ICA packet is so much smaller, at any moment when HTTP packets would otherwise have been transmitted, the total bandwidth consumed was reduced by approximately 75%. This ‘smoothing’ effect allows more users to leverage the available bandwidth without impacting other user sessions (see the worked example following this list).

• ICA traffic generated more than three times the number of packets. In most networks, the impact of a larger number of packets is minimal compared with the number of bytes and latency.
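
The key findings above can be reproduced from the Test Summary table presented later in this document. The following minimal sketch (Python, using only the figures from that table) shows the arithmetic behind the 27%, 30%, 25%, and three-times figures:

# Values copied from the Test Summary table:
# (test, packets, total KB, average packet size in bytes, average latency in ms)
direct = [("1DIR28",   388, 185.15, 477.18, 174),
          ("1DIR1544", 375, 181.64, 484.38, 154),
          ("3DIR64",   439, 212.28, 483.55, 295)]
ica    = [("1ICA28",   1392, 156.82, 112.66, 121),
          ("1ICA1544", 1361, 122.70,  90.16, 135),
          ("3ICA64",   1245, 144.15, 115.78, 134)]

dir_kb, ica_kb = sum(t[2] for t in direct), sum(t[2] for t in ica)
print(f"Traffic reduction: {(dir_kb - ica_kb) / dir_kb:.0%}")             # ~27% less traffic

dir_lat = sum(t[4] for t in direct) / len(direct)
ica_lat = sum(t[4] for t in ica) / len(ica)
print(f"Average latency reduction: {(dir_lat - ica_lat) / dir_lat:.0%}")  # over 30%

for d, i in zip(direct, ica):
    print(f"{i[0]}: ICA packet size is {i[3] / d[3]:.0%} of HTTP")        # under 25% in every case

ratio = sum(t[1] for t in ica) / sum(t[1] for t in direct)
print(f"ICA generated {ratio:.1f}x as many packets")                      # more than three times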

Document Overview This document is divided into the following sections:

Testing Methods. Discusses the ways that this type of testing is customarily done, as well as the method that was chosen and why.

Bandwidth Testing Process. Describes how application bandwidth testing should be done in order to achieve quantifiable results.

Custom PeopleSoft User Activities. Details the PeopleSoft user script that was executed during the application bandwidth tests.

Environment Architecture Overview. Depicts the WAN links, server configuration, and other details associated with the environment that was used for testing.

Test Cases. Describes the six test cases that were performed.

Results and Analysis. Provides detailed output of the tests that were performed in chart and graph form.

Summary. Provides a brief summation of the tests and associated results.


Test Results. Details the test data that was obtained during each of the tests.

Executive Summary This section includes a brief overview of the testing objective and approach, as well as a summary of the test results.

Objective The purpose of testing PeopleSoft Financials 8.4 was to determine the bandwidth and performance impact on a WAN/LAN when running PeopleSoft directly via a web browser, as well as through an ICA session. The tests focused on the packets that traversed the WAN, as well as the associated latency caused by those packets. The tests detailed in this document measured only the network impact associated with running PeopleSoft in a WAN environment when no other applications were being used.

Approach Testing was completed by creating two test scenarios:

• MetaFrame XP ICA sessions running PeopleSoft 8.41 as a published IE browser application.

• Standalone client using a local IE Browser to access the PeopleSoft 8.41 application via HTTP sessions

To emulate a WAN environment, all tests were performed over network links configured to deliver three specific bandwidths:

• 28 Kbps

• 64 Kbps

• 1.544 Mbps

Six test cycles were executed:

• 1DIR28. 1 Standalone client using HTTP session on a 28 Kbps link

• 1ICA28. 1 ICA session on a 28 Kbps link

• 3DIR64. 3 Standalone clients/users using HTTP session on a 64 Kbps link

• 3ICA64. 3 ICA sessions/users on a 64 Kbps link

• 1DIR1544. 1 Standalone Client using HTTP session on a 1.544 Mbps link

• 1ICA1544. 1 ICA session on a 1.544 Mbps link

The approach used for these bandwidth tests could easily be duplicated at no cost in a typical business environment with live WAN links, and readers are encouraged to repeat these tests using the tools and processes described. However, if active WAN circuits are used, the tests should be run when little to no traffic is traversing the WAN links (e.g., after business hours, when backups and database synchronizations are not occurring), and a baseline of existing network traffic should be taken first.


Test Summary The following is a high-level summary of the results. Please note that the first number indicates the test number, the second number indicates the number of sessions, the test indicator (DIR or ICA) indicates whether the test was based on directly accessing the application via a web browser or via an ICA session, respectively, and the final number indicates the WAN link speed. More details regarding these findings are discussed within the body of this document.

Test           # of Packets   Total KB   Average Packet Size (bytes)   Average Latency (ms)   Latency Difference
1: 1DIR28      388            185.15     477.18                        174                    --
2: 1ICA28      1392           156.82     112.66                        121                    30% Reduction
3: 1DIR1544    375            181.64     484.38                        154                    --
4: 1ICA1544    1361           122.70     90.16                         135                    12% Reduction
5: 3DIR64      439            212.28     483.55                        295                    --
6: 3ICA64      1245           144.15     115.78                        134                    56% Reduction

Results and Comments The tests performed clearly showed that ICA traffic had a lesser impact on the network when compared with the deployment of PeopleSoft via a standard browser.

• Fewer bytes traversed the network

• At any given point where HTTP traffic would have traversed the network, the comparable ICA session used a smaller percentage of the available bandwidth

• Latency was reduced by a minimum of 12% when using ICA vs. HTTP, improving perceived application responsiveness. When bandwidth was limited and the comparable HTTP user sessions saturated the link, the ICA latency reduction rose to 56%.

Network administrators should find this to be good news since WAN links are expensive, and minimizing the impact of new applications and users on the network is always a high priority. ICA provides the ability to deliver more concurrent user sessions than standalone HTTP sessions. Please see the Results and Analysis section for charts and graphs that depict the specific results achieved.

Concurrent Network Sessions Based on the figures presented in this analysis, some simple conclusions can be determined:

• Bandwidth use with a local HTTP/PeopleSoft client will be dependent upon usage patterns. This will then drive the number of supported sessions.

• When bandwidth is limited, ‘burst’ events cause traffic to queue on the network and decrease application responsiveness.


• In the 64 Kbps environment, based on average packet size, you could reasonably expect to support up to four times the number of ICA users vs. a locally deployed solution. This is due primarily to the ICA packet being roughly one quarter the size of the average packet generated by the native solution, which results in a more efficient distribution of data across the network (see the sketch following this list).

• The limiting factors for any deployed solution are:

o Maximum data bursts

o User acceptance of subjective performance degradation

• Because ICA limits data ‘bursts’ by smoothing traffic over many smaller packets while maintaining performance, a network link can be loaded with additional ICA sessions as a factor of the difference between the maximum ICA peak and the maximum local/HTTP peak. Further, as ICA user counts increase, the minimum ICA utilization must be accounted for to ensure that bandwidth remains available for peak activity across all users on the link.
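
As a rough illustration of the headroom argument above, the following sketch (Python) uses only the average packet sizes measured on the 64 Kbps link (Case 5 vs. Case 6). It is an estimate only: it ignores protocol overhead and the burst and user-acceptance factors listed above.

dir_avg_packet_bytes = 483.55   # Case 5 (3DIR64), from the Test Summary table
ica_avg_packet_bytes = 115.78   # Case 6 (3ICA64), from the Test Summary table

headroom = dir_avg_packet_bytes / ica_avg_packet_bytes
print(f"ICA packets are about {headroom:.1f}x smaller on average")        # ~4.2x

# The three direct HTTP sessions saturated the 64 Kbps link in Case 5, so a
# first-order estimate of the ICA session count at the same saturation point:
direct_users_at_saturation = 3
print(f"Rough ICA session estimate: ~{int(direct_users_at_saturation * headroom)}")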

Comments Please note that Citrix will be releasing MetaFrame XP Presentation Server, Feature Release 3 before mid-year 2003, and this release includes a more advanced version of SpeedScreen that can directly take into account the available bandwidth and modify the display of Internet Explorer. Thus, the total number of packets and associated latency with SpeedScreen Browser Acceleration promise to be even more advantageous; however, because the final version of this feature release was not available at the time of testing and writing this white paper, Feature Release 2 was used.


Testing Methods In a scalability/performance test of Windows 2000 server with MetaFrame XP Presentation Server, decisions regarding the test methods were required in order to standardize valid testing of the environment.

There are four primary types of scalability testing that are appropriate to the MetaFrame XP Presentation Server environment:

1. Scripted Test: Automated execution of scripts that mimic user actions without any user intervention.

2. Real Users Test: Actual users execute their daily tasks without any specified order.

3. Real Users with Tasks List: Actual users execute a set of pre-defined tasks.

4. Combination: A combination of two or more of the aforementioned testing methods.

The following sections discuss each method in more detail and compare the advantages and disadvantages of each method.

Scripted Test For this method, a standard set of scripts is written to control the actions of test users that are similar to typical PeopleSoft users. These scripts are developed to simulate a desired set of predefined actions (workflows), which are based on the user’s role and the applications used during a typical user session. Each workflow may contain sub-workflows that dictate the multiple paths users take to complete these daily tasks. These sub-workflows are the basis for the scripts that are generated. Script execution is initiated at set intervals to ensure that steps taken while working in an application are not repeated simultaneously for all virtual users during the test. These intervals ensure more accurate results since the application is able to respond in a more realistic manner.

For the test process detailed in this document, the functional flows for these scripts have been developed by Citrix consulting and are based on PeopleSoft’s application verification user flow.

Real User Test The second method for scalability testing is to have users log into the system and perform tasks similar to those of a typical workday. The results obtained from this method are geared toward real-life scenarios. The caveat to using this method is that more variables exist in the test, such as the number of users, activities, and interruptions. This makes it more difficult to run the same exact test while increasing user load, making system configuration changes, or repeating the test.

When running this type of test, most client environments would benefit from monitoring their systems and capturing the performance counters and data in a database format. Resource manager for MetaFrame XP is designed to accomplish this, and these figures can provide significant value and accuracy, provided that a large enough population sample of data is captured.

Real Users with Task List The next method for scalability testing is a combination of Scripted Tests and Real User Testing. Real User Testing with task lists includes having real users access the system, while executing a written set of tasks in a random order. These tasks are analogous to the workflows defined in the Custom PeopleSoft Scripts section. Developing customer specific tasks for scalability testing will best represent the different types of users that will access the system on a daily basis. Each user will be accessing the system at different speeds, reflecting a realistic production environment.


However, these users will be following a common set of tasks that will help with standardizing the scalability tests when they need to be re-run with additional users.

This type of test is resource intensive and can be difficult to coordinate. Most corporate environments cannot provide multiple resources for testing an application.

Combination The final method for scalability testing is a combination of a custom script and real users accessing the test environment. For example, five client computers emulating six users each could be used in conjunction with several Real Users performing searches and more complex customer transactions.

Scalability Test Methods Summary Table 1 - Scalability Test Methods Summary summarizes the advantages and disadvantages of each scalability test method described above.

Scripted Test
    Advantages:
    • No variables; completely controlled
    • Identical tests can be repeated as many times as needed
    • No user time required to do test
    • Tests can be re-run as environment grows
    Disadvantages:
    • Takes significant time and tools to create test scripts
    • No “user skill levels” incorporated into test

Real Users Test
    Advantages:
    • Real life test
    • Allows for different user types and skill levels
    Disadvantages:
    • Impossible to have two identical tests
    • User’s time is needed to perform test
    • Need users from each ISV’s customer base

Real Users with Task List Test
    Advantages:
    • Can be as controlled as needed
    • Test can be repeated with a high degree of similarity between previous tests
    • Allows for different user types and skill levels
    Disadvantages:
    • User’s time is needed to perform test
    • The project team will have to create a task list for users customized to their role, which can be very complex and time consuming

Combination
    Advantages:
    • Can emulate most user activities with custom scripts, while live users can test actions that were not scripted and gauge acceptable latency
    Disadvantages:
    • Multiple users’ time is needed to perform test

Table 1 - Scalability Test Methods Summary

PeopleSoft Scalability Testing Method Based on the project requirements, Citrix Consulting decided to use the Scripted Test method. This ensured identical, controlled tests that could be repeated in different contexts as necessary. The user activity scripts developed for testing were carefully formulated by Citrix Consulting to accurately simulate normal user activity and are listed in the Custom PeopleSoft User Activities section.

Simulated user sessions were replayed via an Internet Explorer-based macro on one or three client devices, as required, using an evaluation version of iOpus Internet Macro software. This allowed for network communications to occur in parallel, including contention for network bandwidth, much like a true client environment. A centralized testing tool such as Mercury LoadRunner or the Citrix Server Test Kit was not used for bandwidth testing because network utilization results might have been skewed based on all client session traffic using one NIC for incoming and outgoing communications.


Bandwidth Testing Process As with all testing strategies, a clearly defined testing process helps to ensure accurate and repeatable results. The following section is an overview of a six-step process for testing the bandwidth requirements of an application with and without MetaFrame XP.

Planning the Test Successful testing requires development of a thorough test plan. A clearly defined test plan ensures that the user scenarios that are developed will accomplish the bandwidth testing objectives. Bandwidth test planning involves:

Defining the objectives of the bandwidth testing. Because corporations can deploy PeopleSoft via direct access from a web browser or via a MetaFrame XP session, a key differentiator can be the network bandwidth requirements.

Planning the server, network, and client configuration. Because these components are not mutually exclusive and each has an impact on the overall tests, creating an environment that is similar to a typical scenario enables readers to better understand how PeopleSoft implementations can be deployed and their associated impact on the network.

Configuration of a LAN and WAN environment that simulates a typical corporate environment. The WAN environment was configured with two simulated links, one fixed at 28 Kbps and one configurable for 64 Kbps or 1.544 Mbps, so that the impact of PeopleSoft-related traffic could be ascertained and analyzed. Only one link was active during a test.

Define a typical user session. A typical user workflow would be created, used, and repeated during the course of all tests. This ensures consistency within the test results.

Prepare the environment. Because bandwidth testing can be impacted by unrelated tests within the affected subnets, no other tests were performed during the testing timeframe.

Monitor the environment. When the WAN link reaches saturation, the user experience should be assumed to be unacceptable. For the purposes of these tests, saturation is assumed at ongoing utilization of 70% and/or peaks of 100% for several seconds.

Additional users. Where tests were run based on multiple users, these client devices spawned new sessions based on a specified delay. When new users are accessing a web-based application, the initial screen paints can consume large amounts of bandwidth. How often new user sessions are spawned should be reflective of the planned environment, e.g., call center agents typically log in within a very short span of time whereas corporate users across time zones will log in more sporadically.

Gathering data. During all user tests, network data was recorded and captured.

Ongoing planning. In a corporate environment, the results of initial planning should be modified to reflect learnings, environmental factors, potential issues, and application testing requirements.

Creating the Scenario A scenario describes the events that occur in a testing session. A scenario includes defining the keystrokes and mouse clicks that occur during a typical client session. An automated process, such as a macro recorder, should be used to emulate the actions that a typical user will be required to execute.


Running the Scenario User load is emulated by initiating new user sessions based on a specified test scenario in specific time increments. Before executing a scenario, configuration and scheduling are defined so that consistency can be ensured. In this case, mouse clicks and text box entries were made based on the “Slow” setting within iOpus Internet Macros software, which equated to one action every two seconds.

Monitoring the Scenario While the scenario is being run, a network monitoring tool such as Microsoft Network Monitor, Performance Monitor, or a network protocol analyzer should be leveraged to monitor the bandwidth utilization across the WAN link. The latter is the optimal choice. In addition, the Session Monitoring and Control (SMC) Console, a component of the SMC Software Development Kit, can be used to analyze ICA network traffic to the virtual channel level, including compression and latency.
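
To make the saturation criteria from the planning step concrete (ongoing utilization above 70%, or peaks of 100% lasting several seconds), a minimal check along these lines can be scripted. This sketch (Python) assumes per-second byte counts have already been exported from whichever monitoring tool is used; the three-second peak threshold is an assumption standing in for "several seconds":

def check_saturation(bytes_per_second, link_kbps, sustained_threshold=0.70, peak_seconds_limit=3):
    link_bytes_per_sec = link_kbps * 1000 / 8          # e.g., 64 Kbps -> 8,000 bytes per second
    utilization = [b / link_bytes_per_sec for b in bytes_per_second]
    average_utilization = sum(utilization) / len(utilization)
    seconds_at_peak = sum(1 for u in utilization if u >= 1.0)
    return {
        "average_utilization": average_utilization,
        "seconds_at_100_percent": seconds_at_peak,
        "saturated": average_utilization > sustained_threshold or seconds_at_peak >= peak_seconds_limit,
    }

# Example: ten one-second samples captured on the 64 Kbps link
print(check_saturation([2000, 7900, 8000, 8000, 3000, 1500, 900, 800, 8000, 2500], 64))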

Analyzing Test Results Throughout execution of a scenario, the monitoring tool(s) record the performance of the system under test at different load levels. This information is later organized into a readable format for archiving, analysis, and reporting.
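
For captures exported as comma-delimited files (as was done with the network analyzer used in these tests), the per-test figures reported later in this document (number of packets, total KB, and average packet size) can be tallied with a short script. The column names below ("Time" in HH:MM:SS form and "Size" in bytes per packet) are illustrative assumptions; the actual export layout of a given analyzer may differ:

import csv
from collections import defaultdict

def summarize_capture(path, time_col="Time", size_col="Size"):
    per_minute = defaultdict(lambda: {"packets": 0, "bytes": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            minute = row[time_col][:5]                      # "HH:MM"
            per_minute[minute]["packets"] += 1
            per_minute[minute]["bytes"] += int(row[size_col])

    # Keep only complete minutes, as in the test methodology: the partial
    # first and last minutes of each capture are discarded.
    minutes = sorted(per_minute)[1:-1]
    packets = sum(per_minute[m]["packets"] for m in minutes)
    total_bytes = sum(per_minute[m]["bytes"] for m in minutes)
    return {
        "packets": packets,
        "total_kb": total_bytes / 1000,                     # KB as reported in this document
        "avg_packet_bytes": total_bytes / packets if packets else 0.0,
    }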

Baselining To get a better result set from the bandwidth tests for the MetaFrame XP server environment, baseline network data should be captured before users start accessing and testing the server. In a test environment with simulated WAN links, typically no other network traffic will be traversing WAN links; however, in a test or production environment that includes live WAN links, it is likely that existing network traffic will be present.


Custom PeopleSoft User Activities In this section, the session script that was used for testing is detailed. This script was used for all tests and was looped three times to provide sufficient data.

PeopleSoft User Script The objective of this script was to simulate the usage pattern of a typical PeopleSoft user. Programmatically, a two-second delay was inserted between each activity by selecting the “Slow” replay option, and the script was looped three times. (An illustrative sketch of this replay pacing appears after the script narrative below.)

PeopleSoft Script Narrative 1. The user searches for a Product Related Receivable by performing the following steps:

o Click on Set Up Financials/Supply Chain

o Click on Product Related

o Click on Receivables

o Click on Options

o Click on System Functions

o Click on the Search box

o Enter “DM-07”, and hit the Enter key

o After verifying that the page and data appear, select the Home hyperlink to exit

2. The virtual user creates a new Asset

o Click on Asset Management

o Click on Owned Assets

o Click on Basic Add

o Click on Search

o Click on Add a New Value hyperlink

o Leave asset ID as “NEXT” and press enter key

o Enter information on the following fields: Description, short description, Asset Status (“Work In Progress”), and Acquisition Code (“Purchased”)

o Click on Save

o Click on Home

3. The virtual user creates a new Customer Contract

o Click on Customer Contracts

o Click on Contracts Home


o Click on My Contracts
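
For readers who want to reproduce this kind of scripted test with other tooling, the sketch below (Python) illustrates the pacing described above: one action every two seconds, the whole workflow looped three times, and roughly four seconds between loops. It is illustrative only; in the actual tests the recorded macro was replayed inside Internet Explorer by iOpus Internet Macros, and perform_action() here is a hypothetical stand-in for a recorded click or text entry.

import time

WORKFLOW = [
    "Click Set Up Financials/Supply Chain", "Click Product Related", "Click Receivables",
    "Click Options", "Click System Functions", "Click Search box",
    "Enter DM-07 and press Enter", "Click Home",
    "Click Asset Management", "Click Owned Assets", "Click Basic Add", "Click Search",
    "Click Add a New Value", "Leave asset ID as NEXT and press Enter",
    "Enter asset details", "Click Save", "Click Home",
    "Click Customer Contracts", "Click Contracts Home", "Click My Contracts",
]

ACTION_DELAY_SECONDS = 2    # the "Slow" replay setting: one action every two seconds
LOOPS = 3                   # the workflow was looped three times per test

def perform_action(step):
    print(f"{time.strftime('%H:%M:%S')}  {step}")   # placeholder for the real UI action

def replay(workflow=WORKFLOW, loops=LOOPS):
    for _ in range(loops):
        for step in workflow:
            perform_action(step)
            time.sleep(ACTION_DELAY_SECONDS)
        time.sleep(4)                               # approximately four seconds between loops

if __name__ == "__main__":
    replay()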

Graphic Depiction

[Flowchart: the three workflows described above (Search for Product-Related Receivable, Create New Asset, and Create New Customer Contract), each depicted as its sequence of clicks.]


Environment Architecture Overview This section details the laboratory environment that was created for the purposes of this test. It also details the tools that were used as part of this testing process. All configurations and tests were done by a Cisco Certified Network Professional (CCNP).

Lab Environment The environment that was used for this test was as shown below:

[Lab network diagram: two back-to-back Cisco 2621 routers (RT01 and RT02) emulate the WAN, with serial link S0/0 at 28 Kbps and serial link S0/1 configured for 64 Kbps or 1.544 Mbps as needed (WAN interface addresses 172.16.20.5/30 and 172.16.20.6/30, and 172.16.20.9/30 and 172.16.20.10/30; router LAN interfaces 172.16.30.1/24 and 172.16.10.2/24). A Cisco 3512 switch interconnects the 172.16.10.x/24 network, which contains CCSLABS42, the MetaFrame XP server (HorseFarm, 172.16.10.189), and CCSLABS49, the PeopleSoft/BEA Tuxedo server (172.16.10.190). On the far side of the WAN, the 172.16.30.x/24 network contains the client devices (172.16.30.11 through 172.16.30.14), the LoadRunner host LR01 (172.16.30.15), and a printer (172.16.30.20).]

Lab Network Configuration

As is shown, two back-to-back routers were used to simulate a WAN link. Testing was done using either link but not both; the unused connection was disconnected when not in use. The S0/1 link was configured for 64 Kbps or 1.544 Mbps as appropriate. All client devices were located on the far side of the WAN, and there was no other traffic traversing the WAN link.

WAN Link Two Cisco 2621 routers were configured back to back to emulate WAN connections using a physical DCE/DTE cable out of each port within the WIC-2T card. During the tests, a single user session was initiated and run using a 28 Kbps link, a single user session was initiated and run using a 1.544 Mbps link, and three user connections were initiated and run using a 64 Kbps link. The routers were reconfigured between tests and several minutes were allotted so that the routing tables would be regenerated. There was no other traffic, such as FTP, printing, etc., that was traversing the WAN link.

Client Devices All of the client devices were either Windows XP or Windows 2000 Professional computers running ICA Client version 6.30.1050 and Internet Explorer 6.0. iOpus Internet Macros software, which is an Internet Explorer plug-in, was loaded on each of the client devices so that the same recorded macro could be replayed and looped within Internet Explorer without human intervention.

ICA Client The ICA clients on these client devices were configured using the defaults for a WAN connection, as shown. Note that the default configuration for a WAN link includes data compression and disk cache but does not queue mouse movements and keystrokes.

Internet Explorer 6.0 The standard installation of Internet Explorer version 6.0 was the browser that was used for testing. No modifications were made to Internet Explorer version 6.0. Please see Internet Explorer 6.0 Configuration below.

iOpus Internet Macros A trial version of iOpus Internet Macros software was loaded on each client device. This software enabled a recorded Internet Explorer-based macro to be run from each client device and looped three times without human intervention. For more information regarding iOpus Internet Macros software, please see: http://www.iopus.com/.

For all tests where multiple client sessions were launched, approximately a 15-second interval was used for initiating subsequent sessions.

Server Hardware and Software

Citrix MetaFrame XP Presentation Server The server CCSLABS42 was used as the MetaFrame XP server. It was based on a new installation of MetaFrame XP Presentation Server, Feature Release 2 installed with the default settings, including the Citrix XML Service being installed on TCP port 80.

A new server farm, HorseFarm, was created specifically for this test. The configuration of the published application, PeopleSoft 841, was based on publishing Internet Explorer as an application with a command line reference to the login page, as shown. PeopleSoft was published using 256 colors and full screen presentation.

The Session Monitoring and Control (SMC) Console, an example from the Citrix MetaFrame Server Software Development Kit (SDK), was loaded onto the MetaFrame XP server to provide a more granular assessment of the compression, latency, and other network-related impact of each ICA session. In this SDK example, however, the data is available graphically in a read-only mode and is not directly comparable to the network data captured for direct user sessions.


ICA Connection Configuration The default configuration of MetaFrame XP was used for the purposes of this test. None of the virtual channels, such as client drive mapping, clipboard, or audio, were disabled. No Feature Release 2 policies were applied. Thus, the following configuration was used in order to ensure consistent test results:

Advanced Settings

• Timeout Setting for Connection = Inherit User Configuration
• Timeout Setting for Disconnection = Inherit User Configuration
• Timeout Setting for Idle = Inherit User Configuration
• Required Encryption = Basic
• AutoLogon = Inherit User Configuration
• Prompt for Password = Not selected
• Initial Program = Inherit User Configuration
• Only Run Published Apps = Not selected
• User Profile Overrides = Not selected
• On a broken or timed out connection = Inherit User Configuration
• Reconnect sessions disconnected = Inherit User Configuration
• Shadowing = Inherit User Configuration

ICA Client Settings

• Client Audio Quality = Medium

Client Settings

• Connect Client Drives at Logon = Inherit User Configuration (checked)
• Connect Client Printers at Logon = Inherit User Configuration (checked)
• Default to Main Client Printer = Inherit User Configuration (checked)
• Disable Client Drive Mapping = Not selected
• Disable Windows Client Printer Mapping = Not selected
• Disable Client LPT Port Mapping = Not selected
• Disable Client COM Port Mapping = Not selected
• Disable Client Clipboard Mapping = Not selected
• Disable Client Audio Mapping = Not selected

Internet Explorer 6.0 Configuration Internet Explorer 6.0 is the web browser that was used to access the PeopleSoft application. The default medium security configuration was used on the client devices, as well as the MetaFrame XP Presentation Server and PeopleSoft servers, which was:

ActiveX

• Download Signed ActiveX controls = Prompt
• Download unsigned ActiveX controls = Disable
• Initialize and Script ActiveX controls not marked as safe = Disable
• Run ActiveX controls and plug-ins = Enable
• Script ActiveX controls marked safe for scripting = Enable

Downloads

• File download = Enable
• Font download = Enable

Microsoft JVM

• Java Permissions = High Safety

Miscellaneous

• Access Data Sources across domains = Disable
• Allow META REFRESH = Enable
• Display Mixed Content = Prompt
• Don’t prompt for client certificate selection when no certificates or only one certificate exists = Disable
• Drag and drop or copy and paste files = Enable
• Installation of desktop items = Prompt
• Launching programs and files in an IFRAME = Prompt
• Navigate sub-frames across different domains = Enable
• Software channel permissions = Medium safety
• Submit non-encrypted form data = Prompt
• User data persistence = Enable

Scripting

• Active scripting = Enable
• Allow paste operations via script = Enable
• Scripting of Java applets = Enable

User Authentication

• Logon = Automatic logon only in Intranet zone

PeopleSoft On the CCSLABS49 server, the following software was loaded:

• PeopleSoft Financials 8.40

• PeopleSoft Edition of BEA Tuxedo version 6.5/BEA Jolt 1.2

• PeopleTools 8.41

• BEA WebLogic version 5.1.0 and Server 6.1 (SP1)


• SQL Server 2000

Please note that most enterprise installations separate PeopleSoft, BEA, and/or SQL Server onto distinct servers; however, for the purposes of this test, all three applications were installed on the same server.

eEye Iris On both the MetaFrame XP server and the PeopleSoft servers, CCSLABS42 and CCSLABS49, respectively, an evaluation copy of eEye Iris Network Traffic Analyzer version 4.06 was loaded in order to assess the network load that was passing through each of the servers. This network analyzer enabled the capture of bandwidth data to a comma-delimited file. More information regarding eEye Iris can be found at http://www.eeye.com/html/Products/Iris.

Hardware and Operating System Specifications The hardware and operating system used for each of the above-referenced servers was based on the following specifications:

CCSLABS42

• Purpose: MetaFrame XP Presentation Server, Feature Release 2
• Operating System: Windows 2000 Server with SP3
• Microsoft Hotfixes: Q147222, Q295688, Q320206, Q321599, Q322842, Q326830, Q326886
• Citrix Hotfixes: XE102W021, SMC SDK
• Number of Processors: 2
• Processor Speed: 1.4 GHz
• RAM: 2 GB
• Partition size of C: 18 GB
• Anti-Virus Software: Trend Server Protect 5.35.0.1047 (service stopped)
• NIC Vendor: Compaq NC7780 Gigabit Adapter
• Configured NIC speed: 100 Mbps

CCSLABS49

• Purpose: PeopleSoft Financials 8.4 and BEA Tuxedo
• Operating System: Windows 2000 Server with SP2
• Microsoft Hotfixes: Q299956, Q311967, Q313450, Q313582, Q313829, Q314147, Q318138, Q319733, Q320176, Q320206, Q321599, Q311401
• Citrix Hotfixes: N/A
• Number of Processors: 2
• Processor Speed: 1.4 GHz
• RAM: 2 GB
• Partition size of C: 18 GB
• Anti-Virus Software: Trend Server Protect 5.35.0.1047 (service stopped)
• NIC Vendor: Compaq NC7780 Gigabit Adapter
• Configured NIC speed: 100 Mbps

Test Cases The bandwidth testing cases were aimed at determining the network resources that would be required to effectively run PeopleSoft 8.41. The tests were designed to answer the following questions:

How much bandwidth is required to support a single PeopleSoft session using ICA as compared with a web browser during steady state activities? How does minimal bandwidth versus sufficient bandwidth impact the test results?

How much bandwidth is required to support three PeopleSoft sessions using ICA as compared with a web browser during steady state activities? How does contention for network resources impact the results?

During ICA sessions, how much compression can be realized as seen via the ICA Session Monitoring and Control (SMC) Console?

In all cases, the following were included as part of each test case:

The user scenario discussed in the Custom PeopleSoft User Activities section was spawned and looped three times.

Each mouse click or text field entry was separated by approximately two seconds; this was done programmatically by choosing the “Slow” replay option within iOpus Internet Macros software. Approximately four seconds elapsed between each loop of the user workflow.

Each test case consisted of three components: an extended 500-byte ping before any network traffic was introduced, a network bandwidth capture during a user session, and three extended 500-byte pings during a user session. Please note that the network bandwidth capture and extended ping were not performed at the same time. The network capture results are based on each complete minute that the user workflow loop was run; partial minute results at the beginning and end of each loop were discarded. The median results of the three extended 500-byte pings were captured and recorded. Additional data regarding the ICA sessions was also captured by means of the SMC Console.

The 500-byte ping from the server to the client machine was performed before each test case was run and again approximately 15 seconds after each test had commenced. This was done to determine whether any packets were dropped, as well as to define the latency introduced by the user session(s). The following command line entry was used: ping [client IP address] -l 500 -n 20.
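
A small helper along these lines can automate the before/during pings and extract the latency statistics so that the median of the three during-test runs can be recorded. It is shown here only as an illustrative sketch (Python), not the tooling actually used, and it assumes Windows-style ping output of the form shown in Appendix A ("Minimum = 125ms, Maximum = 453ms, Average = 174ms"):

import re
import statistics
import subprocess

def extended_ping(host, size=500, count=20):
    out = subprocess.run(["ping", host, "-l", str(size), "-n", str(count)],
                         capture_output=True, text=True, check=False).stdout
    match = re.search(r"Minimum = (\d+)ms, Maximum = (\d+)ms, Average = (\d+)ms", out)
    return {k: int(v) for k, v in zip(("min", "max", "avg"), match.groups())} if match else None

def during_test_average(host, runs=3):
    results = [extended_ping(host) for _ in range(runs)]
    return statistics.median(r["avg"] for r in results)    # median of the three runs

# Example: during_test_average("172.16.30.11")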

Case 1: Single Direct User over 28 Kbps Link 1DIR28: In this test case, one client user session was spawned directly from Internet Explorer over a simulated 28 Kbps link. A network bandwidth capture was taken for traffic between 172.16.10.221 and 172.16.30.11. A 500-byte ping was performed from the server via the following command: ping 172.16.30.11 -l 500 -n 20.

Case 2: Single ICA Session over 28 Kbps Link 1ICA28: In this test case, one client user session was spawned via MetaFrame XP over a simulated 28 Kbps link. A network bandwidth capture was taken for traffic between 172.16.10.189 and 172.16.30.11. A 500-byte ping was performed from the server via the following command: ping 172.16.30.11 -l 500 -n 20.


Case 3: Single Direct User over 1.544 Mbps Link 1DIR1544: In this test case, one client user session was spawned directly from Internet Explorer over a simulated T-1 link. A network bandwidth capture was taken for traffic between 172.16.10.221 and 172.16.30.11. A 500-byte ping was performed from the server via the following command: ping 172.16.30.11 -l 500 -n 20.

Case 4: Single ICA Session over 1.544 Mbps Link 1ICA1544: In this test case, one client user session was spawned via MetaFrame XP over a simulated T-1 link. A network bandwidth capture was taken for traffic between 172.16.10.189 and 172.16.30.11. A 500-byte ping was performed from the server via the following command: ping 172.16.30.11 -l 500 -n 20.

Case 5: Three Direct Users over 64 Kbps Link 3DIR64: In this test case, three client user sessions were spawned in 10-second increments over a simulated 64 Kbps link. A network bandwidth capture was taken for traffic between 172.16.10.221 and 172.16.30.11. A 500-byte ping was performed from the server via the following command: ping 172.16.30.13 -l 500 -n 20.

Case 6: Three ICA Sessions over 64 Kbps Link 3ICA64: In this test case, three client user sessions were spawned via MetaFrame XP in 10-second increments over a simulated 64 Kbps link. A network bandwidth capture was taken for traffic between 172.16.10.189 and 172.16.30.11. A 500-byte ping was performed from the server via the following command: ping 172.16.30.13 -l 500 -n 20.


Results and Analysis Following the successful completion of each of the test cases, the results were compiled as shown below. Please note that the data supporting these results can be found in Appendix A of this document.

Results

Case 1 vs. Case 2

Measurement                       Case 1: 1DIR28   Case 2: 1ICA28   ICA as a Percent of DIR
Average packet size (bytes)       477.18           112.66           23.6%
Total bytes                       185147           156823           84.7%
Average latency (ping, ms)        174              121              69.5%
Maximum latency (ping, ms)        453              141              31.1%

[Chart: 1 Session at 28 Kbps. Bar chart comparing Case 1 (1DIR28) and Case 2 (1ICA28) on average packet size in bytes, total kilobytes, average latency in milliseconds, and maximum latency in milliseconds, using the values in the table above.]

Case 3 vs. Case 4

Measurement                       Case 3: 1DIR1544   Case 4: 1ICA1544   ICA as a Percent of DIR
Average packet size (bytes)       484.38             90.16              18.6%
Total bytes                       181643             122703             67.6%
Average latency (ping, ms)        154                135                87.7%
Maximum latency (ping, ms)        422                188                44.5%

[Chart: 1 Session at 1.544 Mbps. Bar chart comparing Case 3 (1DIR1544) and Case 4 (1ICA1544) on average packet size in bytes, total kilobytes, average latency in milliseconds, and maximum latency in milliseconds, using the values in the table above.]

Case 5 vs. Case 6

Measurement                       Case 5: 3DIR64   Case 6: 3ICA64   ICA as a Percent of DIR
Average packet size (bytes)       483.55           115.78           23.9%
Total bytes                       212277           144148           67.9%
Average latency (ping, ms)        295              134              45.4%
Maximum latency (ping, ms)        656              203              30.9%


[Chart: 3 Sessions at 64 Kbps. Bar chart comparing Case 5 (3DIR64) and Case 6 (3ICA64) on average packet size in bytes, total kilobytes, average latency in milliseconds, and maximum latency in milliseconds, using the values in the table above.]

Overall Data The following chart represents the overall data that was captured:

[Chart: Overall Data. Grouped bar chart of number of packets, total KB, and average packet size for all six test cases, matching the figures in the Test Summary table earlier in this document.]

Analysis Based on the findings described above, it is evident that ICA sessions required significantly fewer total bytes and a smaller average packet size, although ICA traffic generated a higher number of packets. The network also showed far less latency when ICA sessions were deployed. Latency, a common measure of perceived application performance, was reduced by over 50% in one scenario; put another way, in the 3ICA64 test case, application responsiveness effectively doubled when running PeopleSoft through an ICA session.

If an organization were seeking to deploy PeopleSoft to its remote users, the impact on the network would be a critical risk factor to consider. To a network administrator, ICA traffic translates to fewer bytes crossing WAN links. Because WAN circuits comprise a large portion of an IT department budget, savings in this area can translate directly to the hard dollars associated with monthly lease fees. Especially in instances where WAN links are in a fragile state and cannot handle significantly more traffic or be upgraded, deploying PeopleSoft via MetaFrame XP provides an excellent option for reducing network bandwidth consumption.

Network administrators can also appreciate the smoothing effect of having fewer large packets, which is typical of ICA traffic, as evidenced by the packet size and latency results. Typically, HTML pages create spikes in bandwidth consumption as each page is accessed and drawn on the user’s screen. Minimizing the impact of these spikes ensures that latency is less prevalent across the WAN links. Because multiple users and multiple traffic types share WAN links, more consistent bandwidth requirements ensure that other users and applications are not subject to unpredictable network availability due to the impact of a single application.

However, if the ICA traffic must pass through a firewall, router, or other network equipment that is in a fragile state, the significantly higher number of packets may have an adverse impact where that equipment lacks sufficient memory, CPU, or feature support. Network equipment purchased within the last several years should handle the additional packets without issue, but the point is noted to ensure planners have all the facts available when evaluating options.


Appendix A: Test Results

Ping Results (Latency and RTT)

This section details the results of executing the following command on each server before and during tests: ping [client IP address] -l 500 -n 20. Please note that these are 500-byte pings, not standard 32-byte pings, and that each command sent 20 echo requests to ensure a sufficient sample. For all tests that were run during a user session, the ping test was run three times, and the highest and lowest results were discarded.
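The latency figures in this appendix can also be collected programmatically. The following is a minimal sketch, not part of the original test harness: it assumes a Windows host (it invokes the standard Windows ping command and parses its statistics line) and interprets the three-run rule above as keeping the run with the middle average.

# Minimal sketch (not from the original test harness): automates the
# "ping <client IP> -l 500 -n 20" check used in this appendix. Assumes a
# Windows host, because it parses the Windows ping statistics line.
import re
import subprocess

def ping_stats(target: str, size: int = 500, count: int = 20) -> dict:
    """Run one ping test and return min/max/avg round-trip times in ms."""
    out = subprocess.run(
        ["ping", target, "-l", str(size), "-n", str(count)],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Minimum = (\d+)ms, Maximum = (\d+)ms, Average = (\d+)ms", out)
    if match is None:
        raise RuntimeError("Could not parse ping statistics:\n" + out)
    return {"min": int(match.group(1)), "max": int(match.group(2)), "avg": int(match.group(3))}

def middle_of_three(target: str) -> dict:
    """Run the test three times and keep the run with the middle average,
    discarding the highest and lowest (one reading of the rule above)."""
    runs = sorted((ping_stats(target) for _ in range(3)), key=lambda r: r["avg"])
    return runs[1]

if __name__ == "__main__":
    print(middle_of_three("172.16.30.11"))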

Case 1: 1DIR28

1DIR28 Before test: 172.16.10.221->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 125ms

1DIR28 During test: 172.16.10.221->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 125, 172, 125, 375, 125, 125, 125, 391, 125, 125, 219, 125, 453, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 453ms, Average = 174ms


Case 2: 1ICA28

1ICA28 Before test: 172.16.10.189->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 125ms

1ICA28 During test: 172.16.10.189->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 140, 125, 125, 125, 141, 125, 141, 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 128ms

Case 3: 1DIR1544

1DIR1544 Before test: 172.16.10.221->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 125ms

1DIR1544 During test: 172.16.10.221->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 203, 125, 125, 125, 125, 125, 125, 156, 422, 125, 125, 125, 125, 125, 125, 125, 125, 313, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 422ms, Average = 154ms

Case 4: 1ICA1544

1ICA1544 Before test: 172.16.10.189->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 125ms

1ICA1544 During test: 172.16.10.189->172.16.30.11

C:\>ping 172.16.30.11 -l 500 -n 20
Pinging 172.16.30.11 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 125, 125, 140, 125, 187, 125, 125, 125, 125, 125, 141, 188, 125, 125, 125, 141, 125, 156, 140, 125
Ping statistics for 172.16.30.11:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 188ms, Average = 135ms

Case 5: 3DIR64

3DIR64 Before test: 172.16.10.221->172.16.30.13

C:\>ping 172.16.30.13 -l 500 -n 20
Pinging 172.16.30.13 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.13:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 125ms

3DIR64 During test: 172.16.10.221->172.16.30.13

C:\>ping 172.16.30.13 -l 500 -n 20
Pinging 172.16.30.13 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 266, 250, 359, 328, 344, 656, 172, 125, 125, 265, 266, 125, 125, 500, 313, 125, 125, 547, 313, 579
Ping statistics for 172.16.30.13:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 656ms, Average = 295ms

Case 6: 3ICA64

3ICA64 Before test: 172.16.10.189->172.16.30.13

C:\>ping 172.16.30.13 -l 500 -n 20
Pinging 172.16.30.13 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.13:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 141ms, Average = 125ms

3ICA64 During test: 172.16.10.189->172.16.30.13

C:\>ping 172.16.30.13 -l 500 -n 20
Pinging 172.16.30.13 with 500 bytes of data (20 replies, each bytes=500, TTL=126)
Reply times (ms): 141, 141, 125, 125, 141, 125, 141, 125, 141, 141, 125, 141, 125, 125, 203, 125, 125, 125, 125, 125
Ping statistics for 172.16.30.13:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss)
Approximate round trip times in milli-seconds:
    Minimum = 125ms, Maximum = 203ms, Average = 134ms

Network Bandwidth Testing

In all cases, the network data capture was started after user login had completed and while the script was reproducing step one or two. Each test ran for approximately three minutes. To effectively compare the data, at least the first ten seconds of each test were discarded. Data for the next two and one-half minutes was then considered valid, and all data after that period was discarded.
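The trimming rule described above can be applied mechanically once a capture is exported. The sketch below is illustrative only; it assumes the packets are available as (timestamp in seconds, size in bytes) records, which is an assumption about the export format rather than a feature of the capture tool.

# Illustrative only: applies the trimming rule described above to packet
# records exported as (timestamp_seconds, size_bytes) tuples.
from typing import Iterable, Tuple

def summarize_window(records: Iterable[Tuple[float, int]],
                     warmup_s: float = 10.0,
                     window_s: float = 150.0) -> dict:
    """Drop the warm-up period, keep the next window, and summarize it."""
    ordered = sorted(records)                      # order by timestamp
    if not ordered:
        return {"packets": 0, "total_kb": 0.0, "average_size": 0.0}
    start = ordered[0][0] + warmup_s               # skip at least 10 seconds
    end = start + window_s                         # keep 2.5 minutes of data
    sizes = [size for ts, size in ordered if start <= ts < end]
    total_bytes = sum(sizes)
    return {"packets": len(sizes),
            "total_kb": total_bytes / 1000,
            "average_size": total_bytes / len(sizes) if sizes else 0.0}

# Hypothetical three-record input, for illustration only:
print(summarize_window([(0.5, 884), (12.0, 60), (20.3, 1454)]))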

eEye Iris was used to capture network bandwidth data for all tests. The result of each of these tests is detailed below:

Test          # of Packets  Total KB  Average Size (bytes)
1: 1DIR28     388           185.15    477.18
2: 1ICA28     1392          156.82    112.66
3: 1DIR1544   375           181.64    484.38
4: 1ICA1544   1361          122.70    90.16
5: 3DIR64     439           212.28    483.55
6: 3ICA64     1245          144.15    115.78
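As a sanity check on the table above, the Average Size column can be re-derived from the other two columns; treating the report's "Total KB" as 1,000-byte kilobytes reproduces the reported values to within rounding. The following is a minimal sketch, not part of the original analysis.

# Illustrative only: re-derives the Average Size column from the other two
# columns of the summary table above.
RESULTS = {
    "1: 1DIR28":   (388,  185.15, 477.18),
    "2: 1ICA28":   (1392, 156.82, 112.66),
    "3: 1DIR1544": (375,  181.64, 484.38),
    "4: 1ICA1544": (1361, 122.70, 90.16),
    "5: 3DIR64":   (439,  212.28, 483.55),
    "6: 3ICA64":   (1245, 144.15, 115.78),
}

for test, (packets, total_kb, reported_avg) in RESULTS.items():
    derived_avg = total_kb * 1000 / packets
    assert abs(derived_avg - reported_avg) < 0.5, test
    print(f"{test}: derived average size {derived_avg:.2f} bytes "
          f"(reported {reported_avg})")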

In addition, the first ten packets captured for each of the tests are detailed below:

1: 1DIR28

Timestamp     Source MAC         Dest MAC           Type  Protocol   Source IP      Dest IP        Src port  Dest port  Size
17:22:32:966  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3357      80         884
17:22:32:966  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3357       54
17:22:32:982  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3357      80         60
17:22:32:982  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3357      80         60
17:22:32:982  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3357       54
17:22:32:998  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3358      80         62
17:22:32:998  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3358       62
17:22:33:013  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3358      80         60
17:22:33:123  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3358      80         884
17:22:33:154  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3358       437

2: 1ICA28

Timestamp     Source MAC         Dest MAC           Type  Protocol   Source IP      Dest IP        Src port  Dest port  Size
13:9:30:020   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       71
13:9:30:020   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3215      1494       60
13:9:30:098   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       77
13:9:30:161   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       79
13:9:30:177   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3215      1494       60
13:9:30:223   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       109
13:9:30:333   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       87
13:9:30:348   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3215      1494       60
13:9:30:395   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       95
13:9:30:458   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       79

3: 1DIR1544

Timestamp     Source MAC         Dest MAC           Type  Protocol   Source IP      Dest IP        Src port  Dest port  Size
13:25:32:193  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3230      80         1507
13:25:32:209  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3230      80         219
13:25:32:209  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3230       54
13:25:32:271  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3230       516
13:25:32:271  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3230       1454
13:25:32:271  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3230       1454
13:25:32:271  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3230       1342
13:25:32:271  00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3230       298
13:25:32:521  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3230      80         60
13:25:32:834  00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3230      80         60

4: 1ICA1544

Timestamp     Source MAC         Dest MAC           Type  Protocol   Source IP      Dest IP        Src port  Dest port  Size
13:18:31:321  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       67
13:18:31:430  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       67
13:18:31:430  00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3215      1494       60
13:18:32:321  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       79
13:18:32:383  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       197
13:18:32:414  00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3215      1494       60
13:18:32:477  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       131
13:18:32:555  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       335
13:18:32:602  00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3215      1494       60
13:18:32:649  00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3215       81

5: 3DIR64

Timestamp     Source MAC         Dest MAC           Type  Protocol   Source IP      Dest IP        Src port  Dest port  Size
13:46:9:412   00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3272      80         886
13:46:9:412   00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3272       54
13:46:9:427   00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3272      80         60
13:46:9:427   00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3272      80         60
13:46:9:427   00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3272       54
13:46:9:443   00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3274      80         62
13:46:9:443   00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3274       62
13:46:9:459   00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3274      80         60
13:46:9:568   00:04:DD:5F:A7:61  00:08:02:45:4A:E6  IP    TCP->HTTP  172.16.30.11   172.16.10.221  3274      80         886
13:46:9:599   00:08:02:45:4A:E6  00:04:DD:5F:A7:61  IP    TCP->HTTP  172.16.10.221  172.16.30.11   80        3274       437

6: 3ICA64

Timestamp     Source MAC         Dest MAC           Type  Protocol   Source IP      Dest IP        Src port  Dest port  Size
17:47:0:204   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3379       67
17:47:0:314   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3379      1494       60
17:47:0:439   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3379       67
17:47:0:626   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3379      1494       60
17:47:1:189   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3379       79
17:47:1:267   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3379       157
17:47:1:282   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3379      1494       60
17:47:1:439   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3379       71
17:47:1:501   00:08:02:45:4C:1A  00:04:DD:5F:A7:61  IP    TCP->ICA   172.16.10.189  172.16.30.11   1494      3379       103
17:47:1:517   00:04:DD:5F:A7:61  00:08:02:45:4C:1A  IP    TCP->ICA   172.16.30.11   172.16.10.189  3379      1494       60


851 West Cypress Creek Road

Fort Lauderdale, FL 33309
954-267-3000
http://www.citrix.com

Copyright © 2003 Citrix Systems, Inc. All rights reserved. Citrix, WinFrame and ICA are registered trademarks, and MultiWin and MetaFrame are trademarks of Citrix Systems, Inc. All other products and services are trademarks or service marks of their respective companies. Technical specifications and availability are subject to change without prior notice.