
Stingray Traffic Manager Sizing Guide

Stingray Traffic Manager version 8.0, December 2011. For internal and partner use.

Introduction

The performance of Stingray Traffic Manager depends on a number of factors: hardware capacity, configuration complexity, and the nature of the traffic and network conditions. This sizing guide is based on field experience and internal lab tests, and gives cautious estimates for the hardware necessary to support various configurations.

Performance characteristics of hardware platforms

Stingray configurations have a performance limit (a peak SSL TPS and peak outbound bandwidth limit). The following table recommends hardware configurations to support the licensed capacity, and is based on experience with Intel Xeon platforms, running the Stingray software on a modern Linux server:

Stingray configuration | Licensed performance                  | Recommended minimum hardware                                     | Typical hardware cost | Notes
Stingray 1000 L        | 10 Mbps throughput, 1,000 SSL TPS     | 1 CPU core, 500 MB RAM, 2 x 1 GbE NICs                           | $500                  | e.g. 1-core virtual instance
Stingray 1000 M        | 200 Mbps throughput, 1,000 SSL TPS    | 2 CPU cores, 1 GB RAM, 2 x 1 GbE NICs                            | $1,000                | e.g. 2-core virtual instance
Stingray 1000 H        | 1 Gbps throughput, 1,000 SSL TPS      | 2 CPU cores, 1 GB RAM, 2 x 1 GbE NICs                            | $1,000                | e.g. 2-core virtual instance
Stingray 2000 L        | 1 Gbps throughput, 10,000 SSL TPS     | 1 x 4-core CPU, 2 GB RAM, 2 x 1 GbE NICs                         | $2,000                |
Stingray 2000 M        | 2 Gbps throughput, 10,000 SSL TPS     | 1 x 4-core CPU, 4 GB RAM, 4 x 1 GbE NICs                         | $3,500                |
Stingray 4000 L        | 5 Gbps throughput, unlimited SSL TPS  | 2 x 4-core CPUs, 16 GB RAM, multiple 1 GbE or 2 x 10 GbE NICs    | $7,500                | Estimated at 25,000 SSL TPS
Stingray 4000 M        | 10 Gbps throughput, unlimited SSL TPS | 2 x 6-core CPUs, 24 GB RAM, 2 x 10 GbE NICs                      | $10,000               | Benchmarked at 39,000 SSL TPS
Stingray 4000 H        | 20 Gbps throughput, unlimited SSL TPS | 2 x 6-core CPUs, 24 GB RAM, 4 x 10 GbE NICs                      | $10,000               | Benchmarked at 39,000 SSL TPS
Stingray 4000 VH       | Unlimited                             | 2 x 6-core CPUs, 24 GB RAM, multiple 10 GbE NICs                 | $10,000               | Benchmarked at 39,000 SSL TPS and 36 Gbps throughput

Core counts assume a modern, high-performance Intel Xeon core; other architectures may vary.


Hardware sizing guidelines

CPU: Physical (dedicated) server: Intel Xeon, 2 x 4 cores is suitable for most medium enterprise deployments. AMD Opteron and Oracle SPARC are also supported (performance may vary). Virtual servers: 1-4 vCPUs.

Memory: Minimum 0.5 GB (adequate for development); 2 GB minimum is recommended for production. Recommended sizing: 2 GB base, plus an additional 1 GB per 50,000 concurrent connections, plus additional memory for content caching. 4-8 GB is appropriate for most small to medium enterprise deployments.

Disk space: Minimum 1 GB (software), 10 GB (virtual appliance). Recommended: 10 GB. Additional disk space is used for event logs and traffic logs.

Networking: Recommended 2 NICs (front-end and back-end networks). 10 GbE and trunking/bonding are supported.
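
A minimal sizing sketch (Python) applying the memory guideline above: 2 GB base, plus 1 GB per 50,000 concurrent connections, plus whatever is set aside for content caching. The connection count and cache size in the example are illustrative assumptions, not recommendations.

```python
def recommended_ram_gb(concurrent_connections: int, cache_gb: float = 0.0) -> float:
    """Rough RAM recommendation: 2 GB base + 1 GB per 50,000 connections + cache."""
    base_gb = 2.0                                   # base software + OS
    per_conn_gb = concurrent_connections / 50_000   # 1 GB per 50,000 connections
    return base_gb + per_conn_gb + cache_gb

# Example (assumed workload): 150,000 concurrent connections, 2 GB content cache.
print(recommended_ram_gb(150_000, cache_gb=2.0))    # -> 7.0; provision 8 GB
```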

Rules of thumb for judging performance

Bandwidth: Up to 5 Gbps throughput is comfortably achievable on most modern 2-socket servers with sufficient networking, provided that the traffic manager configuration is straightforward (in particular, avoiding the gotchas below). Traffic profiles that lean towards large file transfers can achieve multiples of 10 Gbps, provided that WAN conditions are appropriate.

Any deployments that are expected to exceed this should be sized by real-world tests.

SSL performance: The Stingray SSL stack will achieve between 2,500 and 3,000 new SSL transactions per second per modern CPU core when using 1024-bit SSL keys, and scales evenly across multiple cores (up to the potential connections-per-second limit).

A new SSL transaction involves an RSA key exchange; 2048-bit keys will reduce the SSL rate by a factor of 5, to approximately 500-600 SSL TPS.

Potential SSL new-transactions-per-second performance will reduce with large requests or complex configurations. Competing hardware-accelerated ADC appliances also show performance degradation as request sizes grow.
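
A quick capacity sketch (Python) using the per-core figures above: 2,500-3,000 new SSL TPS per core with 1024-bit keys, roughly one fifth of that with 2048-bit keys. The conservative per-core rates used here are planning assumptions; validate against a real benchmark.

```python
import math

# Conservative ends of the per-core ranges quoted above (assumptions).
PER_CORE_TPS = {1024: 2_500, 2048: 500}

def cores_for_ssl(target_tps: int, key_bits: int = 1024) -> int:
    """CPU cores needed to sustain a target rate of new SSL transactions/sec."""
    return math.ceil(target_tps / PER_CORE_TPS[key_bits])

print(cores_for_ssl(10_000, key_bits=1024))  # -> 4 cores
print(cores_for_ssl(10_000, key_bits=2048))  # -> 20 cores (the factor-of-5 hit)
```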

Compression: Compression performance depends on the mix of compressible and non-compressible content, and on the nature of the compressible content. The Stingray Performance Reference gives an indication of plaintext throughput for fully compressible traffic; partially compressible traffic will deliver a greater throughput.

Note that the industry standard measurement for compression throughput measures the uncompressed data throughput; data on the wire will be significantly less.
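
A one-line conversion (Python) from the quoted plaintext throughput to on-the-wire bandwidth, following the measurement convention above. The 4:1 compression ratio is an assumed figure for typical text content, not a published Stingray number.

```python
def wire_gbps(plaintext_gbps: float, compression_ratio: float = 4.0) -> float:
    """On-the-wire bandwidth for a given uncompressed (plaintext) throughput."""
    return plaintext_gbps / compression_ratio

# Example: 5 Gbps of compressible plaintext at an assumed 4:1 ratio.
print(wire_gbps(5.0))  # -> 1.25 Gbps actually on the wire
```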

Bandwidth: Very dependent on configuration and traffic mix; 5 Gbps is comfortably achievable for simple configurations.

SSL: Expect 2,500-3,000 SSL TPS per CPU core (1024-bit keys).

Compression: Depends heavily on the mix of content; approximately 5 Gbps of compressible content can be compressed on a mid-range server.

Max connections: Limited by memory (5-20 KB per active connection) and file descriptors (the default limit supports roughly 500,000 connections).

Connection rate: Refer to the Stingray Performance Quick Reference.

Stingray handles traffic differently from many competing products, so connection-rate and concurrency figures are not directly comparable. Stingray figures should be related to real-world projections of customer need, not to competitors’ theoretical figures.


Maximum connections: In practice, the limiting factor for the number of concurrent connections is the number of file descriptors available and the amount of memory in the server.

The file descriptor limit currently defaults to 1,048,576 but can be tuned upwards if necessary [1]. Each active connection requires two file descriptors; an idle keepalive connection requires one.

Memory per connection varies depending on the configuration, rules, buffer sizes and so on. Expect a minimum of 5 KB per connection, around 10 KB for a simple HTTP configuration, and 20 KB or more for a more complex configuration (one that uses SSL, TrafficScript rules and the like). As a rule of thumb, size 0.5-1 GB for the base software and OS, plus an additional 1 GB for each 50,000 concurrently active connections you require (less for idle keepalives). This is a rough estimate; if you are expecting a large volume of traffic, more careful sizing is advised.
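
A short sanity check (Python) against the two limits above: file descriptors (two per active connection, one per idle keepalive) and per-connection memory (5 KB minimum, roughly 10 KB for simple HTTP, 20 KB or more for complex configurations). The example workload figures are assumptions for illustration.

```python
FD_LIMIT = 1_048_576  # default file-descriptor limit; tunable upwards [1]
KB_PER_CONN = {"minimum": 5, "simple_http": 10, "complex": 20}  # rules of thumb

def check_concurrency(active: int, idle_keepalive: int, profile: str = "complex") -> None:
    """Report file-descriptor usage and rough connection memory for a workload."""
    fds_needed = 2 * active + idle_keepalive
    mem_gb = (active + idle_keepalive) * KB_PER_CONN[profile] / (1024 * 1024)
    print(f"file descriptors: {fds_needed:,} of {FD_LIMIT:,}")
    print(f"connection memory: ~{mem_gb:.1f} GB ({profile} profile)")

# Assumed workload: 200,000 active connections plus 100,000 idle keepalives.
check_concurrency(active=200_000, idle_keepalive=100_000)
# -> file descriptors: 500,000 of 1,048,576
# -> connection memory: ~5.7 GB (complex profile)
```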

Connection rate: Although appliance vendors often publish connections-per-second figures, these are measured against a different configuration from Stingray’s and are not comparable with Stingray’s figures. Hardware appliance vendors exploit high-performance ‘FastL4’ or ‘Stream Mode’ configurations that forward packets at layer 4; these configurations are only suitable for simple load balancing and are not used in advanced deployments.

Riverbed publish the results of connections-per-second tests in the Stingray Performance Reference. Stingray operates as a reverse proxy, terminating all traffic in a similar fashion to F5’s HTTP profile (for example). Consequently, Stingray performance results are based on a configuration that is likely to be closer to a production configuration and in general cannot be compared with competing published figures.

Application Firewall (standalone version): Stingray Application Firewall (standalone) comprises two components: the enforcer plugin (installed on the web server) and the decider engine (sized by the number of cores it runs on). Sizing is driven by the number of decider cores in use.

One decider core can typically process 600 requests per second with the baseline configuration; with a typical complex configuration this drops to around 200 requests per second.

Not all requests need to be inspected by the application firewall. With a request mix of 20:1 (20 ‘safe’ requests for every one inspected), an 8-core decider farm can typically take responsibility for a workload of up to 32,000 requests per second (8 cores x 200 inspected requests/s = 1,600 inspected/s, scaled by the 20:1 mix).

Application Firewall (Stingray integrated version): When running on the Stingray Traffic Manager, the application firewall can use up to 8 available cores, allowing the traffic manager to scale to approximately 32,000 requests per second (based on the analysis above), assuming sufficient additional cores are available for the regular traffic management tasks.
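
A small farm-sizing sketch (Python) built from the figures above: 200-600 inspected requests per second per decider core, and a deployment-specific fraction of traffic that actually requires inspection. Both inputs are assumptions to be replaced with measured values.

```python
import math

def decider_cores(total_rps: int, inspected_fraction: float = 1 / 20,
                  per_core_rps: int = 200) -> int:
    """Decider cores needed for an overall request rate, given the fraction of
    traffic that is inspected and the per-core inspection rate."""
    inspected_rps = total_rps * inspected_fraction
    return math.ceil(inspected_rps / per_core_rps)

# The worked example above: 1 request in 20 inspected, 200 inspected rps per
# core with a complex configuration -> 8 cores for 32,000 rps overall.
print(decider_cores(32_000))  # -> 8
```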

[1] http://blogs.riverbed.com/stingray/2005/09/tuning-zeus-traffic-manager-for-maximum-performance.html


Performance gotchas

Increasingly complex Stingray configurations will consume additional CPU and may impact top-line performance once CPU capacity is reached. In practice, if the host server is sufficiently highly specified, there will be plenty of room for growth and these configurations should not significantly affect performance.

Configurations to watch out for include:

IP Transparency: Use of the IP transparency module requires the kernel to perform additional work to track and look up connection data. This can reduce top-line performance by up to 30% in extreme cases.

Mitigation: IP transparency is rarely mandatory (there are a number of workarounds that have minimal performance impact). IP transparency is a property of a pool, and can be applied judiciously, just to the transactions that require it. This potentially reduces the top-line performance impact by a significant degree.

Multi-hosted Traffic IP addresses: Like IP transparency, these require a kernel module that performs additional work to filter traffic, and all incoming traffic is sent to every traffic manager in the multi-hosted cluster. This results in an increase in network traffic and an increase in CPU utilization.

Mitigation: Use Multi-hosted Traffic IP addresses only when an active-active configuration is necessary for performance reasons, and ensure that sufficient network bandwidth and CPU power is available.

Compression: Compression is an extremely compute-intensive operation and its use will reduce capacity. It is valuable when the bandwidth between the clients and the traffic manager is constrained and forms a performance bottleneck.

Mitigation: Compression is generally only performed on large plaintext responses; there is little benefit in compressing small responses, and no benefit in trying to compress content that is already compressed (images, videos, PDFs, many office documents).

SSL key size: SSL is extremely compute-intensive. The computational cost of decrypting with a 2048-bit key is approximately five times that of a 1024-bit key.

Advice: SSL session ID reuse will automatically reduce the number of new SSL handshakes (the operation that incurs the computational hit). If you are considering migrating from 1024-bit keys to 2048, use Stingray’s reporting to gauge how many SSL handshakes you currently perform per second.

TrafficScript rules: TrafficScript rules incur a computational cost. Simple rules that perform content routing, or that inspect and modify HTTP headers, have minimal performance impact. Rules that inspect HTTP content (request or response bodies) or that invoke Java Extensions carry a more significant performance hit.

Mitigation: Monitor the performance of the rules using Stingray’s Real-Time Analytics capability. Configure and tune rules so that expensive operations are only performed when required.

Application Firewall: The application firewall option for Stingray Traffic Manager performs a large number of security tests against the requests and responses that the traffic manager processes.

As a rule of thumb, the Application Firewall module can perform between 200 and 600 transaction validations per second per core in the host server, depending on the complexity of the security rules used. Please refer to the analysis above for more details.

Mitigation: An application firewall is used to protect sensitive application servers and the data on those servers. A sensible security architecture would segment the infrastructure by security need and route traffic accordingly: servers hosting applications and sensitive data in one isolated zone, and other basic servers hosting static content and non-sensitive data in a separate zone. It would then only be necessary to perform in-depth application firewalling against the traffic that is routed to the sensitive security zone.

This significantly reduces the number of requests that need to be inspected and secured, without compromising the security of the application.

Heavy use of caching: Stingray can take advantage of all available memory for content caching, and can also use SSD disks in cases where the cacheable content is very large.

Performance disclaimer

Like any other Layer-7 solution, peak performance of a Stingray solution depends on a number of complex factors, including configuration, traffic mix and profile and network conditions.

Riverbed cannot give hard guarantees of the real-world performance that a particular customer configuration will deliver, but can assist customers with sizing advice, benchmarking expertise and evaluation licenses to help prove suitability and size configurations in a real-world deployment.

Riverbed does not endorse any particular hardware vendor; similar performance characteristics are exhibited by similar platforms (servers and blades) from any major vendor.

Further Reading

Stingray Performance Reference guide: http://www.riverbed.com/

Performance tuning recommendations: http://blogs.riverbed.com/stingray/

About Riverbed Riverbed delivers performance for the globally connected enterprise. With Riverbed, enterprises can successfully and intelligently implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery without fear of compromising performance. By giving enterprises the platform they need to understand, optimize and consolidate their IT, Riverbed helps enterprises to build a fast, fluid and dynamic IT architecture that aligns with the business needs of the organization. Additional information about Riverbed (NASDAQ: RVBD) is available at www.riverbed.com.

Riverbed Technology, Inc. 199 Fremont Street San Francisco, CA 94105 Tel: (415) 247-8800 www.riverbed.com

Riverbed Technology Ltd. One Thames Valley Wokingham Road, Level 2 Bracknell. RG42 1NG United Kingdom Tel: +44 1344 401900

Riverbed Technology Pte. Ltd. 391A Orchard Road #22-06/10 Ngee Ann City Tower A Singapore 238873 Tel: +65 6508-7400

Riverbed Technology K.K. Shiba-Koen Plaza, Bldg. 9F 3-6-9, Shiba, Minato-ku Tokyo, Japan 105-0014 Tel: +81 3 5419 1990

©2011 Riverbed Technology. All rights reserved. Riverbed and any Riverbed product or service name or logo used herein are trademarks of Riverbed Technology. All other trademarks used herein belong to their respective owners. The trademarks and logos displayed herein may not be used without the prior written consent of Riverbed Technology or their respective owners.