8/2/2019 AppFlowAccel
1/52
Application Flow Acceleration Technology
© 2008 Juniper Networks, Inc. All rights reserved.
What Is AppFlow?

AppFlow provides application-level acceleration for three business-critical applications: Microsoft's Common Internet File System (CIFS), Microsoft Exchange traffic, and HTTP. AppFlow builds on TCP acceleration by anticipating client-server requests, accelerating data across the WAN, and reducing the number of round-trip times (RTTs) necessary to transfer data with these applications.

AppFlow Requires TCP Acceleration

You must enable TCP acceleration on both the client-side and server-side WX platforms before you can use AppFlow.
AppFlow Builds on Existing Technologies

AppFlow is one of a series of optimization technologies the WX platform provides. The fundamental layer of optimization is the compression capability of Molecular Sequence Reduction (MSR) and Network Sequence Caching (NSC). These WX technologies allow you to place more data in the WAN pipe.

The next level of WX acceleration, Packet Flow Acceleration (PFA), optimizes data transfers that rely on TCP as the transport mechanism. TCP acceleration can reduce the overall number of round-trip requests and responses often necessary for hosts to transmit or receive data using TCP.

At the top layer of these optimization techniques is the acceleration of specific applications, which use the following protocols:

Common Internet File System (CIFS): This protocol is Microsoft's file services transport.
Messaging Application Programming Interface (MAPI): This protocol transfers data between Microsoft Exchange servers and Outlook clients.
Hypertext Transfer Protocol (HTTP): This protocol transfers Web pages to a client's browser.

Although each of these three protocols uses TCP as its transport mechanism, each presents unique challenges that hamper PFA's efforts to accelerate it. We examine each of these protocols in more detail over the remainder of this chapter so you can better understand these challenges and the benefits offered by AppFlow.
CIFS Protocol Details: Part 1

CIFS is an extremely chatty protocol developed by Microsoft for file services. When you map a network drive in Windows Explorer or copy and paste a file from a network drive to your PC, your host uses CIFS.

CIFS is a LAN-oriented protocol, originally designed for workgroups and small domains whose members all reside on the same local network. Local host-to-host transfer of files, even large ones, on a LAN using CIFS does not affect WAN traffic, for obvious reasons. However, as Windows domains have grown to span multiple networks in multiple locations, using CIFS as a means of file transfer can have a tremendous impact on WAN performance. Users in Windows networks often have numerous drive mappings to remote file servers and send and retrieve documents and other files over WAN connections using CIFS.

CIFS uses a ping-pong, request-and-response model between clients and servers, an architecture that might be acceptable on a LAN but is particularly inefficient across the WAN. CIFS breaks files into small blocks of data and sends them one at a time to the destination host. The destination host must acknowledge each block of data before the sender sends the next block. Any delay on the WAN therefore delays each request and response.
CIFS Protocol Details: Part 2

The packet capture on the slide shows some of the request-and-response process that occurs between two hosts transferring a file using CIFS:

The client receiving the file (192.168.1.25) makes a Read AndX (transfer) request for a block of data from the server sending the file (192.168.1.44).
The client host must receive the entire block before making a request for the next block.
This process continues until the file transfer is complete.
The entire transfer of a 1-MB file requires a total of 1639 packets.

Not illustrated in the packet capture is the tremendous amount of overhead required to begin the actual data transfer. CIFS uses a large number of packets simply to allow a client access to a shared folder and to view that folder's contents. This verification process must conclude before the client can even begin to download a file. File transfers using CIFS involve examinations of the client's access privileges, the client's operating system, the file system of both the client and the server, and a host of additional details contained within a series of packet exchanges not covered here. Needless to say, any latency on a WAN link delays all of these packets, as well as the actual data itself, for clients opening or downloading files from remote sites using CIFS.
CIFS Acceleration: The Solution

CIFS acceleration provides a mechanism for pipelining the data blocks. The WX devices request data blocks as quickly as possible to fill the available WAN capacity. Because CIFS handles file sharing and transfers, a WX device knows that when a client requests the first block of data for a specific file, that client will subsequently ask for the remaining blocks. The WX device can therefore begin retrieving all data blocks for the file even before the client requests them.

CIFS acceleration separates each file transfer process across the WAN into three sessions:

Session 1: Between the requesting client and the local WX platform;
Session 2: Between the two WX devices; and
Session 3: Between the WX platform and the sending host.

The WX platform near the sending host issues ACK messages to that host at exactly the rate needed to fill the WAN pipe with compressed data. The WX device on the sending host's side of the transfer then transmits this data across the WAN to the destination WX device through the transport protocol, UDP or IPComp. Once the data arrives, the requesting client's WX device delivers it to the destination host.
Continued on next page.
CIFS Acceleration: The Solution (contd.)

From a technical standpoint, the client and the host do not actually communicate directly with one another, only with their own local WX devices. However, CIFS acceleration is completely transparent to client and host machines, similar to TCP acceleration.

CIFS-Compatible Platforms

CIFS acceleration supports more recent Windows operating systems, including Windows 2000, XP, Vista, and 2003 servers and clients. CIFS acceleration also supports the Samba v3.0 server for Linux. CIFS acceleration does not accelerate traffic from Windows NT, Windows 95, or older clients and servers.
CIFS File Transfer Without AppFlow

As a block-based protocol, CIFS differs from a streaming protocol such as FTP. When a client requests a file transfer using FTP, the FTP server sends the entire file to the client at once, pausing only to wait for acknowledgement packets based on the client's TCP window size.

CIFS, however, breaks bulk file transfers into many small data blocks (from 64 KB down to as little as 256 bytes) and transmits these blocks serially. The client application and server exchange messages to acknowledge and verify each block that is transmitted, and both hosts must wait for an acknowledgement of each block of data before requesting or transmitting the next. These acknowledgements are in addition to any similar ones provided by TCP.
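To put these block sizes in perspective, a quick back-of-the-envelope calculation (the 1-MB file size is illustrative) shows how many serial block exchanges, and therefore WAN round trips, a transfer needs at each end of the stated range:

```python
import math

# Each CIFS block costs at least one WAN round trip, because the next
# block is not requested until the previous one is acknowledged.
FILE_SIZE = 1 * 1024 * 1024  # 1-MB file, in bytes

for block_size in (64 * 1024, 4 * 1024, 256):
    blocks = math.ceil(FILE_SIZE / block_size)
    print(f"{block_size}-byte blocks -> {blocks} serial round trips")
# -> 16, 256, and 4096 round trips, respectively
```

Even at the largest block size, every one of those exchanges stalls on WAN latency; at the smallest, thousands of stalls accumulate for a single 1-MB file.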
CIFS File Transfer with AppFlow

AppFlow accelerates CIFS reads and writes by prefetching blocks of data that the client will request. Logically, if a client requests the first block of a file, it will subsequently request all remaining blocks as well. When a CIFS client opens a file, it initiates a series of read requests. The WX platforms at the remote and central sites work together to determine that a client is opening a file and, based on the available WAN bandwidth, request the appropriate number of reads needed to fill the WAN link.

The slide depicts the following scenario:

1. When the client requests the first block of data, WX-2 proactively reads the rest of the file from the server.
2. WX-2 then pipelines the data back to WX-1.
3. By the time the client requests subsequent data blocks for the remainder of the file, those blocks are already available on WX-1, which can then supply them to the client as quickly as possible across the LAN.

The process for CIFS writes is the same, with the WX platforms acknowledging the appropriate number of writes to keep the WAN link full.

A 1-MB document, for example, would take 20 seconds to copy over a 512-Kbps link with 100 ms of latency; the same document would take only 2 seconds with AppFlow.
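The 20-second figure is roughly what a simple model predicts. Assuming the link is 512 Kbps, the 100 ms of latency is a full round trip, and the file moves in 64-KB blocks (all three are assumptions made for illustration):

```python
# Serial CIFS transfer time: time to clock the bits onto the link,
# plus one round trip of idle waiting per block.
FILE_BITS = 1 * 1024 * 1024 * 8   # 1-MB file, in bits
LINK_BPS = 512_000                # assumed 512-Kbps WAN link
RTT = 0.100                       # assumed 100-ms round trip
BLOCK_BITS = 64 * 1024 * 8        # assumed 64-KB blocks

blocks = FILE_BITS // BLOCK_BITS          # 16 blocks
serialization = FILE_BITS / LINK_BPS      # ~16.4 s on the wire
waiting = blocks * RTT                    # ~1.6 s idle between blocks
print(f"estimated serial transfer: {serialization + waiting:.1f} s")
```

That lands near the 20 seconds cited once per-block protocol overhead is added. Pipelining overlaps the per-block waits with the serialization delay; the 2-second accelerated figure also relies on compression (MSR/NSC) reducing the bytes actually sent across the WAN.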
The Client-Side WX Device Performs All CIFS Optimizations

The optimization work for CIFS is performed on the client-side WX platform, meaning that all statistics for CIFS acceleration appear on the client side; the server-side device has no CIFS statistics. Note that any host can be a server, a client, or both, depending on who is mounting the file system.

For definition purposes, a client is a host requesting a file, mounting a network drive, or viewing remote drive contents or other file services through CIFS. A server in a CIFS environment is the host that contains the file or owns the file share.
CIFS Uses SMB

CIFS request and response messages use TCP port 139 or 445. CIFS uses the Server Message Block (SMB) protocol, which originally ran over Microsoft's Network Basic Input/Output System (NetBIOS). SMB over NetBIOS uses TCP port 139 (also known as native SMB). SMB over TCP/IP uses TCP port 445 (known as naked or raw SMB transport). Windows 2000 or later and Samba servers use SMB over TCP/IP.

SMB over NetBIOS adds a few more handshakes compared to SMB over TCP/IP. Note that the request-and-response architecture forces a round trip for every transaction.

Application Definitions

The default CIFS application definition includes both port 445 and port 139. When creating new CIFS-based definitions, you should specify both ports.
CIFS Transaction with Common SMB Messages

The packet capture on the slide illustrates a simple CIFS conversation between a server and a client in which the client maps a network drive to the server. Beyond the simple drive mapping, the client also queried the directory and then disconnected the mapped drive. Notice the SMB messages that are sent back and forth to accomplish this event. The messages are the following:

Negotiate Protocol Request: Starts all CIFS client-server conversations and sets the dialect of CIFS;
Session Setup: Used for authentication of the client-server conversation;
Tree Connect: Connects to a share;
Trans2 Request and Trans2 Response: Often used to view attributes of the file; and
Tree Disconnect: Disconnects from a share.

Other frequently used commands include the following:

NT Notify: Indicates changes on the server;
NT Create AndX: Opens or creates a file;
Write AndX (WriteX): Writes data to the server; and
Read AndX (ReadX): Reads data from the server.
Continued on next page.
CIFS Transaction with Common SMB Messages (contd.)

WriteX and ReadX are the two most useful commands for acceleration, because these commands occur in file transfers when performing a file copy. CIFS uses ReadX and WriteX SMBs to receive or send data (always from the client perspective) to and from the server. ReadX and WriteX requests include the file ID, which data blocks to push or pull, and where in the file those blocks reside.
SMB Signing Secures CIFS Transactions

SMB signing attaches a hash value to a packet so that the receiving host can verify that a packet coming from a server has not changed while en route. SMB signing helps reduce the chance of man-in-the-middle attacks. SMB signing was first available in Windows NT 4.0 Service Pack 3 and Windows 98, and has since been included in Windows Server 2003, Windows XP, and Windows 2000 to stop such man-in-the-middle attacks. These operating systems also support message authentication between clients and servers by placing a digital signature into each SMB; both the client and the server verify this signature.

To use SMB signing, you must either enable it or require it on both the SMB client and the SMB server. If SMB signing is enabled on a server, clients that are also enabled for SMB signing use the packet-signing protocol during all subsequent sessions. If SMB signing is required on a server, a client cannot establish a session unless it is at least enabled for SMB signing.

If this policy is set as required on the server, an SMB client must sign the packets. If this policy is disabled, the SMB client is not required to sign packets.
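The enable/require combinations above can be summarized as a small decision function. This is a sketch of the policy logic as described here, not of actual SMB negotiation code:

```python
def smb_signing_outcome(server: str, client: str) -> str:
    """Return the session outcome given each side's signing policy.

    Policies are "required", "enabled", or "disabled".
    """
    # A side that requires signing rejects a peer with signing disabled.
    if "required" in (server, client):
        if "disabled" in (server, client):
            return "no session"
        return "signed"
    # Both sides merely enabled: signing is used for the session.
    if server == "enabled" and client == "enabled":
        return "signed"
    return "unsigned"

print(smb_signing_outcome("enabled", "enabled"))    # signed
print(smb_signing_outcome("required", "enabled"))   # signed
print(smb_signing_outcome("required", "disabled"))  # no session
print(smb_signing_outcome("enabled", "disabled"))   # unsigned
```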
Support for SMB Signing

WXOS software Release 5.5 (and later) supports SMB signing by logging in to a server with a pre-existing user account. You must create this account on the server (or the domain) and assign appropriate privileges to files, directories, and resources, just as you would for an individual user who needs access to those resources. This setup allows the WX device to simulate a user's access and apply CIFS acceleration to SMB-signed files.

However, the WX platform does not support SMB2, the preferred version of the protocol used by Windows Vista hosts. For transactions between Vista hosts and non-Vista hosts, the WX platform downgrades the version to SMB (as part of the negotiation process between hosts using CIFS). The WX device cannot apply CIFS acceleration between two Vista hosts (because they will default to SMB2); such traffic is simply passed through unaccelerated.

SMB Signing Must Be Disabled (or Enabled but Not Required)

For any version of WXOS software prior to Release 5.5, you must disable SMB signing on hosts using CIFS, or set the option to enable but not require signing. For a more in-depth explanation of SMB signing, refer to Microsoft's Knowledge Base at http://support.microsoft.com/default.aspx?scid=kb;en-us;887429, "Overview of Server Message Block signing," Article ID 887429.
SMB Signing: Example

The slide shows an Ethereal packet capture of the SMB protocol with SMB signing enabled. When SMB signing is disabled, the Signature field contains all zeros.
AppFlow Must See the Flow to Accelerate It

AppFlow cannot accelerate a traffic flow unless the WX platform sees the start of the flow. By default, flow-reset mode is on, so the WX device resets eligible CIFS traffic flows if it receives a packet for the flow within 900 seconds (15 minutes) of the tunnel establishment time. This behavior allows the WX device to accelerate existing CIFS traffic flows each time the device is restarted.

You can configure flow reset through the command line, either per tunnel or globally. The following commands control flow resets:

flow-reset start duration seconds (5 to 86400)
flow-reset stop
show flow-reset [configuration | status]

For more information on flow reset, refer to the WX/WXC Operator's Guide.
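For example, explicitly starting a flow reset with the default 900-second window and then checking its state might look like the following (the argument value is illustrative; confirm the exact syntax for your release in the Operator's Guide):

```
flow-reset start duration 900
show flow-reset status
```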
Centralized Exchange and the WAN

In a centralized environment with all users in a single location, Microsoft's Exchange application performs well because communications take place over the LAN, where redundant transmissions do not matter. Once the enterprise extends to include remote locations, however, Exchange's inefficiencies begin to affect performance, placing a significant burden on WAN resources and slowing user productivity.

In the example on the slide, with servers in a centralized location, Exchange sends the exact same e-mail from the company CEO to each recipient in all remote offices, consuming bandwidth across each of those WAN connections.

Additionally, Exchange clients such as Outlook and Outlook Express issue various repetitive transmissions to synchronize accounts, particularly when a user closes Outlook. Not only does this process monopolize WAN bandwidth, often at the expense of more critical applications, but it also reduces productivity with long delays and by freezing user desktops while downloading or sending messages, opening attachments, or shutting down and synchronizing Exchange and Outlook.
Distributed Exchange and the WAN

Businesses have addressed these problems with a decentralized architecture, deploying Exchange servers at remote sites so e-mail messages can be stored and accessed locally.

Using the previous example, under the distributed model the CEO would send the message to the central-office Exchange servers. Each Exchange server in a remote office would need to retrieve the message only once and would then distribute it locally to each Outlook client over the LAN.

This architecture dramatically reduces WAN traffic, not to mention the impact of latency. Over time, however, the widespread deployment of Exchange servers has created significant management problems. Patches, upgrades, and updates are difficult to perform on distributed servers, and regulatory requirements add to the complexity. The Sarbanes-Oxley Act, for example, requires companies to archive e-mail for a minimum of five years. Backing up multiple distributed Exchange servers poses a tremendous administrative challenge. As IT department budgets shrink, the decentralized Exchange architecture has become a nightmare for large companies to maintain.

Centralizing Exchange servers in a single facility greatly reduces management and administrative complexity. Patches, upgrades, and backups are easy to perform, and the business requires fewer servers, reducing capital expenses and further easing the IT department's administrative burden. For companies that have completed this centralization, however, the old WAN-performance problems have returned; businesses are back where they started.
Exchange Acceleration: The Problem

Microsoft Exchange suffers from the same problems as CIFS. Exchange uses MAPI (TCP port 135), which is a chatty protocol. MAPI uses a request-and-response model between clients and servers, breaking files into small blocks of data and sending them one at a time to the destination host. The destination host must acknowledge each block of data before the next block can be sent, creating a ping-pong effect. This inefficiency results in poor performance, even on low-latency links.
Exchange Acceleration: The Solution

Similar in nature to CIFS acceleration, Exchange acceleration provides a method for accelerating bulk writes and reads by sending as many writes and reads in quick succession to the local WX device as needed to fill the available WAN capacity.

The WX platform near the source sends ACK messages to the source at exactly the rate needed to fill the WAN pipe with compressed data. The source WX device then sends the data across the WAN to the destination WX device using UDP or IPComp in its service tunnel. Once the data arrives at the destination device, the WX device can deliver it to the destination hosts at LAN speed.

Best in Centralized Deployments

While the WX platform works within a distributed Exchange server environment, Exchange acceleration works best in a centralized deployment with the clients working in online mode. The next slide elaborates on this topic.

The WX platform accelerates traffic between all Outlook clients and Exchange servers, with one exception: Exchange acceleration does not support mail traffic between Outlook 2003 clients and Exchange 2003 servers. The same is true for Outlook 2007 clients and Exchange 2007 servers. However, the WX platform can still optimize such traffic by accelerating it with TCP acceleration and compressing it using MSR or NSC. Microsoft offers an acceptable means of accelerating Exchange 2003 servers and Outlook 2003 clients.
Continued on next page.
Best in Centralized Deployments (contd.)

Microsoft also offers a less-than-optimal solution to accelerate mail traffic between older versions of Exchange and Outlook clients. Exchange acceleration on the WX platform offers a better solution, provided you disable Microsoft's mail compression on each Exchange server. To perform this task, you must edit the server's registry, locate HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem, and change the Rpc Compression Enabled value to 0 (zero). You must reboot the Exchange server to disable compression.
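On the server itself, this edit could be scripted with the built-in reg tool; the following one-liner is a sketch (the REG_DWORD value type is an assumption, and you should back up the registry before changing it):

```
reg add "HKLM\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" /v "Rpc Compression Enabled" /t REG_DWORD /d 0 /f
```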
Exchange and Outlook Communication Modes

Let us take a closer look at how Exchange and Outlook operated prior to 2003. While working in online mode on the LAN, users saw no problems, because bandwidth was plentiful and latency was minimal.

Users worked in offline mode usually because they were traveling and needed their e-mail stored on their local disks. The main disadvantage of this mode was the lack of access to fresh and current information when scheduling meetings and reading or responding to the latest e-mail. Also, users needed to wait until they were back in online mode to upload any e-mail responses they had created while offline.
Continued on next page.
Exchange and Outlook Communication Modes (contd.)

To resolve these issues and deliver acceptable performance for users, Microsoft introduced Exchange 2003. Touted as a more WAN-friendly solution, Exchange 2003, paired with Outlook 2003 clients, uses caching to pull information once, in the background, and store it on recipients' desktops, eliminating multiple transmissions. This modification (including a default setting for Outlook to automatically download new messages and attachments from the Exchange server immediately upon notification) improves performance significantly, at least for users. Delays and desktop-freezing problems have essentially disappeared. IT departments avoid the need for remote Exchange servers, and e-mails are not sent repeatedly across the WAN.

At first glance, the Exchange problem seems to be solved. The IT staff responsible for managing WAN resources, however, sees things differently, because while Exchange 2003 reduces the impact of latency for remote users, it actually places additional and unforeseen burdens on the WAN.

Before Exchange 2003, Outlook waited for the user to take some sort of action, such as opening an e-mail, before downloading the file from the central Exchange server. Using our example of an e-mail from the CEO to all employees, the transmission of the data would typically span several hours, because Outlook would retrieve the message as each employee opened the e-mail at various times throughout the day.

However, with Exchange 2003, by default, each recipient's Outlook application automatically downloads the message and attachment and stores them locally. This process occurs without any user intervention. Consequently, assuming no users change the default setting, Exchange servers send the message out to all recipients immediately. In a centralized architecture, this action imposes up to 100 times the load on WAN links, reducing the bandwidth available for more critical operations.
Exchange Inefficiencies

The slide illustrates four different ways that a client can launch a file download in pre-2003 versions of Outlook. Consider a simple e-mail exchange between coworkers. When the recipient opens a message, the Exchange server sends the message content (and any attachments) to the user's Outlook client for viewing, although the message itself remains on the Exchange server. If the user closes the application, Outlook initiates a synchronizing process, pulling a copy of the message and its attachments from the server and storing it on the user's hard drive for later viewing offline. The same message, therefore, is sent twice (once for the original viewing and again for the synchronizing process), consuming twice the bandwidth.

Exchange compounds this problem with the Sent Items folder. Because most users leave the default setting to copy all messages to that folder, the Outlook client sends the entire message again to the Exchange server to keep the Sent Items folders in sync between the server and the client. Therefore, a single message read and replied to by a single recipient could cross the network four times. Multiply this redundancy by hundreds or thousands of messages per day for a large organization, add large attachments to the mix, and you can easily see how quickly e-mail can consume bandwidth.
Continued on next page.
Exchange Inefficiencies (contd.)

This bandwidth consumption is truly an issue with versions prior to Exchange 2003. Exchange 2003 was a step in the right direction, dramatically improving performance for end users. However, upgrading to Exchange 2003 is not practical for everyone. As with any application, this upgrade is a major (and disruptive) process that requires tremendous planning and preparation. For many companies, the benefits are not worth the effort.

For organizations that have made the transition, Exchange 2003 has not proven to be a universal cure. WAN utilization issues still exist in Exchange 2003, often canceling out the performance improvements. Regardless of the Exchange version in use, customers need a solution to the WAN problem.
MAPI Is Inherently Inefficient

With the increased use of media-rich messages, users can become understandably frustrated when retrieving 1-MB messages; the process can seem to take forever. In addition to simple file attachments, many e-mail messages contain HTML formatting and objects, further delaying delivery.

Rather than sending the entire attachment or the embedded HTML objects at once, MAPI divides the attachment into data blocks that vary from 8 KB to 16 KB. For each data block sent, MAPI requires a reply from the recipient before sending the next block. This ping-pong behavior, like that of CIFS, means that a 1-MB message could require as many as 100 to 150 RTTs. Waiting for these serial RTTs to complete, users feel the impact of even modest latency; as little as 30 ms of delay dramatically increases the time needed to retrieve messages.
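Simple arithmetic on the stated block sizes (using an illustrative 1-MB message and treating the 30-ms delay as a full round trip) shows where those serial waits come from:

```python
# Data-block round trips for a 1-MB message at each MAPI block size,
# plus the idle time those serial round trips add at 30 ms apiece.
MESSAGE = 1 * 1024 * 1024   # 1-MB message, in bytes
RTT = 0.030                 # assumed 30-ms round trip

for block in (16 * 1024, 8 * 1024):
    round_trips = MESSAGE // block
    print(f"{block // 1024}-KB blocks: {round_trips} round trips, "
          f"{round_trips * RTT:.2f} s of pure waiting")
```

The raw data blocks alone account for 64 to 128 round trips; the protocol messages exchanged around them push the total toward the 100 to 150 RTTs cited above.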
Exchange and Outlook Network Utilization

Exchange suffers from two distinct issues that can impact efficient transmission of data.

The first issue is the number of times Exchange transmits a particular attachment to the client during a specific set of operations. Using MAPI and other complex protocols to communicate with Outlook clients, the Exchange server requires that both e-mail messages and attachments be sent back and forth repeatedly between the server and each recipient during the reading, confirmation, and synchronization processes.

The next page discusses the second form of inefficiency.
Exchange Receive Without AppFlow

The second form of inefficiency for Exchange is exactly the one from which CIFS suffers. Exchange uses a block-based approach to transfer large files. Therefore, when an Outlook client downloads an e-mail with an attachment, the attachment does not transfer all at once; rather, Exchange transmits it in small blocks, one at a time.
Exchange Receive with AppFlow

The WX platform improves Exchange performance through a combination of AppFlow features designed to accelerate the MAPI protocol. PFA ensures that TCP does not become the bottleneck, and NSC eliminates the repeated transmission of data contained within Exchange messages.

With the AppFlow feature, the WX devices send multiple data blocks in quick succession without waiting for acknowledgements for each block. AppFlow handles Exchange receive and send operations similarly. When an Exchange client wants to read an attachment from the Exchange server, the client requests the first data block. Working together, the WX devices on both ends of the WAN link determine that the client has begun to download an attachment.

The WX devices then make the appropriate number of receive requests to fill the available WAN capacity. By the time the Exchange client requests the subsequent data blocks, the WX devices have already transmitted and received the data. The client-side WX platform can now forward the data at LAN speed to the client. Compression complements the AppFlow feature by identifying and eliminating repeated data sequences that have already crossed the WAN.
Accelerating Web Applications: The ProblemHTTP has become extremely popular as many companies look to transition their
thick-client applications, such as SAP or Oracle, to a Web interface. Web interfaces are
easier to manage from an IT departments perspective because every computer has abrowser.
The only difficulty this approach presents is that Web-based applications are often
bandwidth intensive. In fact, a Web-based application can require ten times more
bandwidth to perform the same functions as some older, client-server versions.
Latency can dramatically affect HTTP, just as it affects Exchange and CIFS. HTTP is a
block-oriented protocol driven by client requests and server responses. Complex Web
pages can contain dozens of embedded objects, all of which can require large
amounts of bandwidth to retrieve. Although Web browsers typically have caches built in
to help reduce the amount of bandwidth required to display a particular page, these
caches do not necessarily assist in networks with high latency.
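A back-of-the-envelope sketch shows why round trips, not just bandwidth, dominate here. All numbers below are illustrative:

```python
import math

def min_load_time_ms(objects: int, rtt_ms: float, concurrent: int = 1) -> float:
    """Lower bound on page load time from round trips alone, ignoring
    transfer time entirely. `concurrent` models parallel browser
    connections to the server."""
    return math.ceil(objects / concurrent) * rtt_ms

if __name__ == "__main__":
    # A page with 40 embedded objects over a 100 ms WAN link,
    # fetched on two parallel connections:
    print(min_load_time_ms(40, 100.0, concurrent=2), "ms")  # 2000.0 ms
```

Even before a single byte of object data is counted, the page cannot load in under two seconds; a browser cache only helps if its stored copies are still valid.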
Accelerating Web Applications: The Solution
HTTP acceleration provides users the fastest response times when they access their Web applications.
With WXOS Release 5.7, the WX platform acts as a transparent proxy, requesting and retrieving Web-based content through service tunnels from Web servers behind other
WX devices. Because the cached objects reside on the WX device closest to the
clients, the WX platform can send those items from its cache to Web clients at LAN
speeds.
Just as we have seen with TCP acceleration, HTTP acceleration divides the
client-server conversation into three separate sessions:
Client-to-local WX;
Local WX-to-server-side WX; and
Server-side WX-to-Web server.
The Result
From the user's perspective, the transaction appears to be a normal end-to-end HTTP
session, but with much quicker download times.
Cache Server Behavior
Proxy devices act as middle-men between Web clients and Web servers. This setup allows a proxy to retrieve and store objects from Web servers so it can serve those objects more quickly to local LAN clients, thus speeding up Web response times.
The caching process in the slide follows this general logic:
1. A LAN Web client makes a request for www.bnn.com.
2. The proxy server forwards this request to the Web server for
www.bnn.com.
3. The Web server replies with page content and all objects for the page.
4. The proxy server stores those objects that are valid for caching (as
indicated by the Web server in the response).
5. The proxy server returns the content to the Web client.
6. Other LAN Web clients request the same content (www.bnn.com).
7. The proxy server provides objects from cache without making the same
request to the original Web server.
A number of factors determine which objects a caching server can and cannot (or
should not) store locally. Various parameters within the Web server's reply help a cache server verify that the objects it has stored locally are still current before handing them out to other Web clients on the LAN. We examine these factors (like Pragma: no-cache and If-Modified-Since) on the next several slides.
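The seven steps above can be sketched with an in-memory cache. This is illustrative only; `fetch_from_origin` is a stand-in for the forwarded request to the real Web server:

```python
# Minimal caching-proxy logic mirroring steps 1-7 on the slide.
cache: dict[str, bytes] = {}

def fetch_from_origin(url: str) -> bytes:
    # Placeholder for steps 2-3: forward the request and receive the reply.
    return b"<html>content for " + url.encode() + b"</html>"

def proxy_get(url: str, cacheable: bool = True) -> bytes:
    if url in cache:                  # steps 6-7: later clients hit the cache
        return cache[url]
    body = fetch_from_origin(url)     # steps 2-3: fetch from the Web server
    if cacheable:                     # step 4: store only objects the server
        cache[url] = body             #         marked as valid for caching
    return body                       # step 5: return content to the client

first = proxy_get("http://www.bnn.com/")    # fetched from origin, then cached
second = proxy_get("http://www.bnn.com/")   # served from the local cache
assert first == second
```

The second request never leaves the LAN, which is exactly the saving the slide illustrates.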
Browsers Can Store Content
Most current Web browsers provide an internal cache that allows them to store previously retrieved objects on the local hard drive. If the user later visits the same site, the browser can retrieve these stored objects from its local cache without having to retrieve them from remote servers. The local cache feature of browsers can
dramatically improve responses, as long as those stored objects are still valid.
HTTP, and more specifically HTML, includes a bewildering array of instructions that a
browser must use to determine whether a locally cached object is still fresh. In a
simplified manner, a browser checks with a Web server before it displays the contents
of a cached Web page to make certain that the page has not changed. If the page has
changed since the browsers last visit, the server instructs the browser to download
the page again. If the page has not changed, the server issues a specific HTTP
response code to the browser, indicating that there have been no changes.
Users Can Modify Browser Caching Behavior
You can define how often the browser cache checks with any Web server to verify the
integrity of cached objects. The browser can check on every visit to the page, every
time you launch the browser, never, or automatically (this is typically the default
setting).
Embedded Objects
Initial responses from a Web server to a client browser include the HTML code that defines the layout of the Web page. Included (or embedded) within that code are the locations for all objects that the browser needs to display.
These embedded objects can include a wide variety of items that might reside on the Web server itself. Images, style sheets, documents, and JavaScript files are just a few examples of embedded objects. The src attribute and its value instruct the client's browser where a given object resides and, therefore, how to retrieve and display it on the page.
Is an HTTP Object Fresh?
Theoretically, a browser could cache an object locally forever, unless a method existed to ensure that the browser displays only the most current (or fresh) version of an object. Without some sort of update mechanism, you could visit a Web site that has changed, but your browser might display objects that it cached three months ago.
Fortunately, HTTP provides a way for a browser to check an object's freshness without
having to download the entire object again. The browser performs this check by asking
the server if the object has changed since the last time the browser downloaded it.
Using an If-Modified-Since request, a browser provides the server with a date and time
(typically, the last time the browser downloaded the object).
If the Web server determines that the object has changed since that date, the server
instructs the browser to download the most recent copy of the object. If the Web
server determines that the object has not changed, it returns a response header to
the browser indicating that the object in cache is still current (and therefore the
browser can display the cached object to the user).
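The mechanics of this exchange can be sketched with Python's standard library. The helper names are illustrative; the header format and the 304 (Not Modified) status code are standard HTTP:

```python
import email.utils

def conditional_headers(last_download_epoch: float) -> dict:
    """Build an If-Modified-Since header. HTTP dates use the RFC 1123
    format, for example 'Sun, 06 Nov 1994 08:49:37 GMT'."""
    return {"If-Modified-Since":
            email.utils.formatdate(last_download_epoch, usegmt=True)}

def cached_copy_is_fresh(status_code: int) -> bool:
    """A 304 (Not Modified) response means the browser may display its
    cached object; a 200 response carries a new copy to download."""
    return status_code == 304

hdrs = conditional_headers(0.0)   # the Unix epoch, as a worked example
assert hdrs["If-Modified-Since"].endswith("GMT")
assert cached_copy_is_fresh(304) and not cached_copy_is_fresh(200)
```

The saving is that a 304 response carries only headers, so an unchanged object never crosses the WAN a second time.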
Cache Server Behavior: Nontransparent Versus Transparent
Generally speaking, there are two different types of caching servers: nontransparent and transparent. While the overall function of both types is essentially the same, the nature of their network behavior differs significantly. Regardless of how a proxy cache device operates, you can accurately consider both transparent and nontransparent proxies as middle-men between clients and external Web servers. The TCP conversation between the client and the proxy is completely separate from the conversation between the proxy and the Web server; in fact, they are two different TCP sessions altogether.
If you have ever configured your Internet browser to use a proxy server, the device in
question was a nontransparent proxy. When your browser makes a request for a Web
page, the destination IP in the request is that of the nontransparent proxy. Included in
your browser's request is information that tells the proxy the Web site you want to see.
The slide illustrates the Layer 3 and Layer 4 network translations that occur between a
client, a nontransparent proxy, and a Web server.
Note that the nontransparent proxy uses a completely different IP address and source
port from that of the client. Upon receipt of the Web server's response, the proxy
server reverses this translation before sending the reply to the client.
Transparent Proxy
This type of proxy server does not require any special browser configuration because it is, as the term implies, transparent to network devices, including your browser. Often deployed physically in the path of traffic, transparent proxies can intercept Web requests from clients and return cached objects to them without retrieving those
objects from Web servers.
The illustration on the slide shows that a transparent proxy server does not alter the
client's source IP or source port before forwarding that request to the Web server.
Therefore, the proxy does not need to alter the destination IP or destination port in the
Web servers reply to the client.
These two behaviors, transparent and nontransparent, are simply general
descriptions of how proxy servers can operate. It might be difficult to categorically
describe a given vendor's caching solution as one or the other because many caching
servers can operate transparently for some types of traffic and nontransparently for
others.
We offer the explanations here to help you better understand how the WX device
operates when you deploy it for HTTP acceleration.
The WX: A (Mostly) Transparent Proxy Server
In most respects, the WX platform operates as a transparent proxy server when you implement HTTP acceleration. Fortunately, you do not need to reconfigure each browser on the LAN to access a proxy server, nor does the WX platform alter the source IP of the client before forwarding the request to the Web server.
However, the WX platform does alter the client's source port. For purposes we cover in the following slides, the WX platform adds 10,000 to the client's original source port before forwarding the request to the Web server. Upon receipt of the server's reply, the WX platform subtracts 10,000 from the destination port in that reply so the client receives the appropriate response.
There are still three completely different TCP sessions involved when deploying the WX
platform for HTTP acceleration:
Client-to-WX;
WX to WX; and
WX-to-Web server.
Note that you must deploy WX devices in pairs to take advantage of the WX platform's
HTTP acceleration feature. Although only the client-side WX device actually applies
caching and acceleration for HTTP, it can do so only for traffic destined to Web servers
located behind remote WX devices. This distinction is significant. The WX platform will
not operate as a traditional caching server (either transparently or not) for traffic
destined to sites other than those on remote subnets. If a client makes a request for
content from a Web site not behind a WX device within the same community, the
client-side WX device simply treats that traffic as it does all other uncompressed
traffic and forwards it.
Why 10,000?
The TCP session between the WX device and an external Web server is separate from the initial conversation between the Web client and the WX device. However, the sessions are very similar, so much so that should packets from the server inadvertently reach the client directly (that is, if they somehow bypass the WX device), the client might actually begin an ACK storm.
TCP Requirements
The nature of TCP requires that any receiving host acknowledge all data received from a sending host. If a client receives a packet with a sequence number it has already received, the client issues an ACK for that packet and all subsequent packets it has already received; therefore, the potential for an ACK storm exists if a client receives an original-looking packet directly from the server.
The WX platform therefore adds 10,000 to the client's source port so that any direct replies from the server to the client will not match the response packets the client expects. If one of these modified packets reaches the client from the server, the client simply issues a RST (reset) to the server and drops the packet.
You might wonder how a well-designed network can allow any responses to bypass an
inline device like the WX platform; however, in networks with multiple paths, the
scenario is possible. Additionally, if a WX device suddenly switches to wire mode, all responses from a Web server pass through the WX device unmodified to the client, potentially causing an ACK storm.
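The port arithmetic can be sketched as follows. The 10,000 offset comes from the text; the session-tuple helpers and addresses are illustrative:

```python
OFFSET = 10_000

def outbound_tuple(client_ip: str, client_port: int,
                   server_ip: str, server_port: int = 80) -> tuple:
    """The client-side WX forwards the request with the client's
    source port shifted up by the offset."""
    return (client_ip, client_port + OFFSET, server_ip, server_port)

def inbound_dest_port(reply_dest_port: int) -> int:
    """On the reply path, the WX subtracts the offset so the client
    sees the source port it originally used. A direct (bypassing)
    reply would arrive with the shifted port and draw a RST instead."""
    return reply_dest_port - OFFSET

fwd = outbound_tuple("10.1.1.5", 33012, "203.0.113.7")
assert fwd == ("10.1.1.5", 43012, "203.0.113.7", 80)
assert inbound_dest_port(43012) == 33012
```

Because the shifted port never matches an open client socket, a stray server packet that bypasses the WX is rejected with a RST rather than acknowledged, which is what prevents the ACK storm.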
HTTP Acceleration Summary
HTTP acceleration operates by caching static objects (.css, .gif, .js, .jpg). HTTP acceleration requires two WX or WXC platforms, one on each side of the connection (that is, the WX platform is not a traditional, single-sided cache). The WX and WXC platforms do not cache requests and responses with unknown headers.
Cookies can contain any arbitrary information the server chooses and are used to
introduce state into otherwise stateless HTTP transactions. Without cookies, each
retrieval of a component of a Web page from a Web site is an isolated event.
The WX and WXC platforms do not cache responses from sites that use cookies or
session identifiers. However, the platforms still maintain the URL object database.
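A rough sketch of these cacheability rules follows. The extension list comes from the summary above; the function shape and the use of Set-Cookie to detect cookie-bound responses are illustrative, not the WX implementation:

```python
import os

# Static object types named in the summary above.
STATIC_EXTENSIONS = {".css", ".gif", ".js", ".jpg"}

def is_cacheable(url_path: str, response_headers: dict) -> bool:
    """Cache only static objects, and skip any response tied to a
    cookie (state-bearing responses are not safe to share)."""
    ext = os.path.splitext(url_path)[1].lower()
    if ext not in STATIC_EXTENSIONS:
        return False
    if "Set-Cookie" in response_headers:
        return False
    return True

assert is_cacheable("/img/logo.jpg", {})
assert not is_cacheable("/cart/checkout", {})
assert not is_cacheable("/theme/site.css", {"Set-Cookie": "sid=abc"})
```

The conservative default, skipping anything dynamic or session-bound, trades cache hit rate for correctness, which is why the URL object database is still maintained even when responses are not cached.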
CIFS Acceleration Requirements
You must configure compression, QoS, and TCP acceleration on both ends of the tunnel that will carry CIFS-accelerated traffic.
For WXOS releases earlier than 5.5, turn off SMB signing on both the server and the client. Also, ensure that the end hosts are running CIFS-compatible operating systems.
Configuring CIFS Acceleration
CIFS acceleration is enabled by default. The only step you must complete is to confirm
those CIFS applications that the WX platform will accelerate.
CIFS Acceleration Settings
CIFS acceleration is enabled by default.
If you do not want to use SMB signing when servers do not require it, you can enable that option.
If you want to use CIFS with SMB signing, you must select that option and configure
the appropriate account information. The account you specify should be preconfigured
on servers (or domain controllers) for the WX platform itself.
For WXOS releases prior to 5.5, the WX platform can accelerate only CIFS traffic that
does not use SMB signing. Only Release 5.5 and later support signing.
Note that the user account information you supply in this configuration screen must
have sufficient rights to access any and all server or domain resources (for example, files, documents, and directories) that individuals might need to retrieve. For further information about the details of this account, see the WX/WXC Operator's Guide.
Exchange Acceleration Requirements
You must configure compression, QoS, and TCP acceleration on both ends of the tunnel that will carry Exchange-accelerated traffic.
When pairing Outlook 2003 or 2007 with Exchange 2003 or 2007, remember that the WX and WXC platforms can perform only compression on this traffic because
Microsoft changed its procedure for MAPI RPCs for Exchange and Outlook, making it
difficult to accelerate the traffic.
Also, you must verify that no pre-existing Outlook processes (for example, synchronization) are running before the service tunnel is initiated.
Configuring Exchange Acceleration
Exchange acceleration is enabled by default. The only step you must complete is to
confirm those applications the WX platform will accelerate.
HTTP Acceleration Requirements
You must configure compression, QoS, and TCP acceleration on both ends of the tunnel that will carry HTTP-accelerated traffic.
Remember that HTTP traffic must be unencrypted. You should define HTTP application definitions as specifically as possible (for example, specify TCP ports).
If you want to use HTTP acceleration in both directions (for example, you have Web
servers and Web clients at both locations), you should configure both WX platforms as
client-side devices.
Note that the WX platform will not accelerate HTTP traffic if there is a proxy server
between the server-side WX platform and the actual HTTP server. However, if the proxy
server is located between the Web client and the client-side WX platform, the WX
platform can accelerate HTTP traffic.
Configuring HTTP Acceleration
HTTP acceleration is not enabled by default, so you must manually enable it and select
the application.
Viewing AppFlow Results
The slide shows a screen capture of the monitoring report for CIFS acceleration. One of the differences between monitoring CIFS acceleration and monitoring other features of the WX platform is that CIFS acceleration displays time saved rather than the increase in effective bandwidth.
Troubleshooting AppFlow: Part 1
The slide lists several common areas to examine when troubleshooting AppFlow.
Troubleshooting AppFlow: Part 2
The slide lists several common areas to look at when troubleshooting AppFlow.