
Final Report of PWICE Project 2005
Proprietary: PWICE, UTC, Korean Air

Report on PWICE Project 2005

Remote Maintenance & Training System and Aircraft Data Rapid Acquisition System

TEAM

Pratt & Whitney
Paul Scheid, David Loda, Alexey Ouzounov, Vladimir Baranov, Rork Brown, Tia Cummings, Ankit Desai, Andrew Brenner, Mikhail Murshak, Campbell Kennedy, John Grady, Matt Zajac, Alex Kuznetsov, Charlene Rakyta, Gary Goodrich, Franklin Davis

Integrated Media Systems Center
Shri Narayanan, Roger Zimmermann, Ahmed Helmy and Christos Papadopoulos, Beomjoo Seo, Min Qin, Scott Millward, Panos Georgiou

Inha University
Yoo-sung Kim, Jaehyun Park, Joong Han Yoon, Sang Hun Kwak, KyongI Ku, Hayoung Seo, Junehyung Park, Ju-sung Kim

Korean Air
C.H. Lee, C. Kim, H.M. Kwon, K.B. Cho, K.H. Lim, H.J. Kim, S.H. Cho, W.S. Jeon

February 17, 2006

This report is submitted to the Pratt and Whitney Institute for Collaborative Engineering (PWICE), detailing the results of PWICE Project 2005, titled “Remote Maintenance & Training System and Aircraft Data Rapid Acquisition System”.

Since this report contains proprietary data of Korean Air and Pratt & Whitney, no part of this report may be copied and/or reproduced in any medium or by any method without written permission from Korean Air, Pratt & Whitney, and PWICE, in accordance with the “PWICE Institute Agreement” and the “PWICE Development Agreement”.

© Copyright by PWICE, 2006. All Rights Reserved.


EXECUTIVE SUMMARY

The mission of this project is to develop and implement advanced information processing and communication technologies that directly target aviation operations and maintenance. Specifically, the project focuses on remote maintenance and training for Korean Air’s activities, and on the specific benefits that can be realized using wireless and distributed inspection and maintenance support, including situation analysis, data acquisition, technical guidance and authorization, with the ultimate objective of saving maintenance cost and time while maximizing user experience.

The project in 2005/2006 will leverage and build upon the results achieved by the PWICE team in 2003 and 2004, which included a field test of an initial wireless portal-based field engine inspection system prototype, an on-board microserver architecture design and a prototype high-definition (HD) remote collaboration system. Supporting and longer-term tasks included research in wireless networks, device and base-station antenna design and application-specific compression. The present project is a continuing effort to deliver near-term capabilities that can be used by Korean Air as well as in other business opportunities, while supporting continuing fundamental research advances.

During the PWICE board meeting on 1 February 2005, the PWICE project team presented an executive-level outline of the project plan for the years 2005/2006. This plan was then refined and revised by all parties, and a joint meeting was held on Friday, 22 April 2005, at Korean Air headquarters, Incheon. This document summarizes the meeting results and the specific goals that are proposed for 2005/2006. The proposed project will focus on two areas: Remote Maintenance & Training and an Aircraft Data Rapid Acquisition System. The goals of each effort are as follows:

Remote Maintenance & Training System:
• Emphasis in the first year (2005) is on infrastructure and maintenance scenarios; the emphasis during the second year (2006) will be expanded to include training.
• Design and development of a remote maintenance (conferencing) system for use between the KAL main bases (GMP-ICN-PUS); leveraging IMSC’s Remote Media Immersion (RMI) technologies to enhance user experience in remote collaboration through Internet-based communication; incorporation of user studies involving collaborative problem solving between remotely located partners; use for maintenance training of remote local stations in foreign countries.
  Challenges: because of the low bandwidth between GMP and PUS or ICN and PUS, the current prototype system is not suitable for those communication links. The bandwidth required by the current prototype system is 20 Mb/s (the bandwidth between GMP and PUS is E1, i.e., 2 Mb/s). We will investigate whether a quality improvement of the existing conferencing system is possible.
• Design and integration of a next-generation wireless portal that leverages high-definition media streaming from portable devices via enhanced wireless networks; research studies and field tests on the feasibility and applicability of high-definition streaming to maintenance tasks directly at the aircraft; exploration of the capabilities and range of current-generation wireless networks.
• Room setup at KAL (room engineering requirements).


Hardware specifications were discussed at the 22 April 2005 meeting. Each conferencing station requires an HD camera, a computer, a display, speakers and a microphone. The milestones given below are contingent on the necessary hardware being available in the proposed testing locations (USC, Inha University and KAL).

Milestones for 2005: Demonstration system for the next generation remote conferencing system portal; results on collaborative problem solving with enhanced portal communication capabilities.

Detailed schedule of project:
o Initial test between Inha and USC: end of July 2005
o Set up and test at KAL: end of September (contingent upon the equipment being ready)
o Completion (including wireless connection): December 2005 (contingent upon suitable wireless equipment (access point) being available)
o Prepare user study results and develop next generation requirements (PWICE, KAL).

Placement of this project as an action item on the 2006 board meeting agenda.

Aircraft Data Rapid Acquisition System:
The QAR (Quick Access Recorder) stores most of the flight data, which is important information for managing a fleet as well as each individual aircraft. The current operation to retrieve QAR data relies entirely on manual procedures, which sometimes take a week. The goal of this project is to develop a rapid acquisition system that retrieves and processes aircraft data automatically.

Emphasis in the first year (2005) is on analyzing the current operation in QAR data processing and on implementing a prototype system for the new QAR system that KAL will introduce in 2005.

Design and development of a prototype system that consists of a wireless terminal and a QAR database server; the wireless terminal reads QAR data and delivers it to the QAR database server over a secure channel, and the QAR database server stores all of the QAR data and processes it for maintenance.

Milestone: Design and integration of a prototype system including a terminal and a database server; research studies and field tests on the feasibility and applicability of secure data encryption technology on QAR data; integration test with KAL main database for the fleet management.

Plan for 2006: integrating processed QAR data into the remote maintenance and training system for real-time maintenance; on-site field test of the fully integrated remote maintenance system.

Long-term supporting efforts: Research in ad-hoc wireless networks to support line maintenance applications. Anticipated milestones include simulation results, publications and technology recommendations.


Long Term Research

Plan for 2006: expanding scope to include training/maintenance scenarios. Stabilizing the high-definition conferencing prototype, interfacing to the wireless portal, software optimization for PC and expanded field tests. Milestones: Next version product, integration and test of technologies.

Conclusion

Using and building upon the results obtained in the first two years of PWICE, collaborative problem solving can be developed and implemented to produce higher quality and more efficient results for airline and OEM support operations. A more intuitive collaboration system, one that allows easier access to previous work and is more visible to participants in multiple organizations, should be further evolved. New methods of training personnel in scenario-based problem solving can yield beneficial results in rapid and efficient maintenance and increased customer satisfaction, which will be the focus of next year’s efforts.


ACKNOWLEDGEMENTS

On behalf of all of the project team members, the authors would like to express special thanks to Pratt and Whitney and Korean Air for their enthusiastic cooperation in developing and testing the wireless maintenance portal. The engine maintenance team of Korean Air, led by Y.W. Kim and Y.S. Kang, provided field test environments and network facilities. H.M. Kwon, K.B. Cho, K.H. Lim, H.J. Kim, S.H. Cho, S.H. Cheon and many other staff members prepared everything perfectly for the field test, and in particular provided an aircraft on the ground for the demonstration. Members of the Applied Technologies Group of Pratt and Whitney, led by David Loda, provided all of the technical and administrative support needed to achieve the goals of this project. Pratt and Whitney also provided most of the field test equipment, including Tablet PCs and the wireless LAN equipment installed in the Korean Air hangar. For the live field test in September, the Pratt and Whitney 24-hour help desk members, led by Frank Davis, provided full support even in the middle of the night. Without this help from Pratt and Whitney and Korean Air, this project could not have been as successful as described in this report.

The authors would also like to convey the warmest appreciation and respect of all the team members to Chairman Y. H. Cho for his sponsorship and support, without whose vision the demonstration of this project would not have been possible. We would also like to thank Pratt & Whitney management, including Robert Keady and David Brantner, the Pratt and Whitney Institute, President Seoung-Yong Hong and Inha University, and Provost Max C.L. Nikias and the University of Southern California for their continued support.


Table of Contents

1. Introduction
   1.1. Plan for 2005
2. Remote Collaborative Systems (RCS) in Support of Collaborative Environments
   2.1. Goals and Relations to PWICE Objectives
   2.2. Technical Approach
      2.2.1. High Definition Point-to-Point Streaming
      2.2.2. HD Video Rendering
      2.2.3. HD Software Decoding Performance
3. Characterization of the KAL Network Environment
4. Equipment Setup and Test
5. Retransmission-based Packet Recovery
   5.1. Experiments between ICN and GMP
   5.2. Experiments between GMP and PUS
   5.3. Experimental Results between Inha University and USC
   5.4. Discussion
   5.5. Wireless HD Streaming
   5.6. Appendix: Iperf statistics of the connection between GMP and ICN
   5.7. Conclusions and Future Extensions
   5.8. Publications
6. Pratt Whitney 24 Hr Help Desk Needs Assessment
7. Aircraft Data Rapid Acquisition System


1. Introduction

USC Viterbi School of Engineering, through its Integrated Media Systems Center, is engaged in advanced research and prototyping of scalable immersive environments (SIEs). An instantiation of SIE is the design of two-way conference systems that aims at supporting high quality connectivity over shared wired and wireless networks.

1.1 Plan for 2005

At the 2005 PWICE board meeting, it was proposed and approved to pursue research and prototyping efforts with an overarching goal of creating capabilities for providing remote training/education and problem solving applications for KAL and Pratt & Whitney. As a part of the collaborative mission of the institute, one of the early steps for the first year was to engage with KAL colleagues in the technical design and specifications phase of the remote communication system research and development. This was intended to help guide research toward developing capabilities that can address such needs.

To bring some specific conferencing requirements into R&D design, the PWICE team had a meeting in Korea at KAL with several technical team members (April 2005). At this meeting, as a part of the overall project, certain target milestones were decided upon to test some of the remote conferencing capabilities within the KAL environment as well. The goal here was to study the technical and usability needs of a conferencing system within a commercial environment and inform SIE research.

Pursuing this aspect of the project goal, testing and driving IMSC’s SIE research, required collaboration between USC and colleagues at KAL and INHA, due to the secure nature of a commercial environment as well as the physical distance constraints between LA and Korea.

Milestone 1: USC-INHA system installed and operational: end of July 2005.

Milestone 2: Testing of the network between Incheon and KAL HQ: end of July 2005. This effort was to provide USC researchers with specifications about the nature of the network to inform the specific design.

Milestone 3: Incheon – KAL HQ system installed and operational: end of September 2005

Milestone 4: Wireless high-definition streaming tests conducted at KAL HQ: end of November 2005. This is contingent upon suitable wireless equipment (access point) being available at KAL HQ.

Some initial testing of the remote conferencing software commenced in September and early October at KAL. USC SIE technology was used. Equipment requirements were also provided.


In the meantime, KAL established its own timeline for this software deployment. The remote conferencing systems were expected to be installed and operational in the KAL test-bed environment by the end of 2005.

Milestone (KAL): Three-way remote conferencing system at Incheon, KAL HQ, and Kimhae (or Pusan) installed and operational: end of December 2005

Due to the technical difficulty of measuring the network performance of the KAL test-bed and miscommunication among USC, KAL and INHA, Milestones 2 and 3 were delayed, but were reached by the end of 2005.

In November, the USC team began the analysis of the results and feedback from the KAL tests and considered them in their continued research on the remote conferencing software. This document describes the activities for achieving Milestones 2 and 3 and the partial fulfillment of the new milestone requested by KAL.

Audio acquisition is another major challenge for the remote conferencing system. Multi-channel echo cancellation is a largely open research problem. IMSC researchers are pursuing both near-term and long-term solutions to address the needs of high-quality audio acquisition in conference-type environments. Echo cancellation for a single audio channel has been identified as a needed component and work is on-going in that direction. Optimal microphone and speaker placement is another design issue. However, deployment of multi-channel echo cancellation was not included in the 2005 project plan because much research remains to be accomplished.

The organization of the following sections is as follows. Section 2 describes our technical approach to HD interactive media streaming. Section 3 describes the KAL network environment and its challenges. Section 4 covers the equipment setup and tests. Section 5 presents a customized retransmission-based packet recovery algorithm, its validation model, and simulation results based on the KAL network statistics, and then details the experimental observations of the network links between Incheon and KAL HQ at Kimpo, and between KAL HQ and Kimhae (Pusan), before summarizing and giving directions for our future plans.


2. Remote Conferencing System (RCS) in Support of High Resolution Collaborative Environments

Integrated Media Systems Center, University of Southern California Viterbi School of Engineering

High quality, interactive collaboration tools increasingly allow remote participants to engage in problem solving scenarios, resulting in dramatically quicker decision-making processes. With high resolution displays becoming increasingly common and significant network bandwidth available, high quality video streaming has become feasible and novel, innovative applications possible. However, the majority of the existing systems supporting high definition (HD) quality streaming are based on offline content, and use elaborate buffering techniques that introduce long latencies. Therefore, these solutions are ill-equipped for interactive real-time applications. Furthermore, due to the massive amount of data required for the transmission of such streams, simultaneously achieving low latency and low bandwidth are contradictory requirements. Our HYDRA project (High-performance Data Recording Architecture) [1,2] focuses on the acquisition, transmission, storage and rendering of high resolution media such as HD quality video and multiple channels of audio. HYDRA consists of multiple components to achieve its overall functionality and enables media streaming across an IP-based network with commodity equipment. We have successfully demonstrated a prototype of our system between Inha University in Korea and the University of Southern California in Los Angeles.

2.1 Goals and Relations to PWICE Objectives

The PWICE vision of technologically advancing the maintenance procedures between its customers and the P&W help desk focuses on multiple individual components. The goal of the system is to facilitate and speed up collaborative maintenance procedures between Pratt & Whitney’s help desk and the Korean Air technical personnel working on aircraft maintenance through the following means.

a) Use high fidelity digital audio and high-definition video technology to deliver a high-presence experience and allow several people in different physical locations to collaborate in a natural way to, for example, discuss a customer request.

b) Provide multi-point connectivity that allows participants to interact with each other from three or more physically distinct locations.

c) Design and investigate acquisition and rendering components in support of the above application to optimize bandwidth usage and provide high quality service over the existing and future networking infrastructure.

2.2 Technical Approach

2.2.1 High Definition Point-to-Point Streaming

Figure 1 illustrates the block diagram of the HYDRA high definition live streaming system. High resolution cameras capture the video that is then sent over an IP network to the receiver. Audio can be transmitted either by connecting microphones to the camera and multiplexing the data with the video stream, or by sending the sound as a separate stream. The transmission subsystem uses the RTP protocol and implements selective retransmissions for packet loss recovery. The streams are decoded at the receiver side to render the video and audio.

Our current implementation includes a camera interface that acquires digital video from a JVC JY-HD10U camera via FireWire (IEEE 1394) in HDV format (1280x720 pixels at 30 frames per second). The MPEG-2 data produced by the camera is encapsulated on an IEEE 1394 isochronous channel according to the IEC61883-4 “Digital Interface for Consumer Audio/Video Equipment” standard and must be extracted in real-time. The resulting MPEG transport stream is packetized and can then be transmitted at approximately 20 Mb/s over traditional IP networks such as the Internet. At the client side, the received data stream is displayed through either a software or hardware decoder.

Figure 1: High definition streaming block diagram.

The system uses a single retransmission algorithm [5] to recover lost packets. Buffering is kept to a minimum to maintain a low transmission and rendering latency. Our design can be extended to support multiple simultaneous video streams along with multi-channel sound (i.e., immersive 10.2 channel audio). The issues related to the synchronization of such streams are currently being investigated. The system also integrates with the HYDRA recording system, which focuses on recording of events that produce a multitude of high bandwidth streams.
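As an illustration of the packetization and selective retransmission approach described above, the following sketch (written in Python purely for readability; the actual HYDRA implementation is not in Python and its packet format is not reproduced here) shows how a sender might frame transport stream data into sequence-numbered UDP packets and keep a bounded history so that individual packets can be resent on request. The header layout, names and input file are illustrative assumptions only.

    # Illustrative sketch only; not the HYDRA sender or its wire format.
    import socket
    import struct
    from collections import OrderedDict

    TS_PACKET = 188                      # MPEG-2 transport stream packet size
    PAYLOAD = 5 * TS_PACKET              # 940-byte payload (a multiple of 188)
    HISTORY = 2048                       # packets kept for possible retransmission

    class HDSender:
        def __init__(self, dest):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.dest = dest
            self.seq = 0
            self.history = OrderedDict()  # seq -> payload, bounded buffer of sent data

        def send(self, ts_chunk):
            """Send one payload of transport-stream bytes with a sequence number."""
            header = struct.pack("!I", self.seq)      # hypothetical 4-byte header
            self.sock.sendto(header + ts_chunk, self.dest)
            self.history[self.seq] = ts_chunk
            if len(self.history) > HISTORY:           # keep buffering to a minimum
                self.history.popitem(last=False)
            self.seq += 1

        def retransmit(self, seq):
            """Serve a selective retransmission request for a single packet."""
            chunk = self.history.get(seq)
            if chunk is not None:
                self.sock.sendto(struct.pack("!I", seq) + chunk, self.dest)

    if __name__ == "__main__":
        sender = HDSender(("127.0.0.1", 5004))
        with open("capture.ts", "rb") as f:          # hypothetical TS capture file
            while True:
                chunk = f.read(PAYLOAD)
                if not chunk:
                    break
                sender.send(chunk)

In the real system the payload size, the depth of the history buffer and the header format are dictated by the RTP implementation and the latency budget discussed above.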

2.2.2 HD Video Rendering

The JVC JY-HD10U camcorder includes a built-in MPEG-based codec capable of both encoding and decoding approximately one megapixel (i.e., 1280x720) images at a rate of 30 frames per second. The compressed data rate is approximately 20 Mb/s and hence can be stored onto a DV tape in real-time. This format is called HDV (www.hdv-info.org) and a number of manufacturers have announced their support for it.

The decoding functionality of the JY-HD10U electronics is used to display recorded tapes on the camera LCD viewfinder and also to produce the analog Y-Pb-Pr signals that allow the connection of an HDTV directly to the camcorder. The digital data stream from the camera is only available in compressed form on the built-in FireWire port. As a consequence, if the media stream is transmitted over a network, the rendering component requires an MPEG-2 HD decoder.


Various hardware and software options for decoding of streams are considered to achieve the best quality video with minimal latency. HYDRA currently employs the following two solutions:

1. Hardware-based: When improved quality and picture stability are of paramount importance we use the CineCast HD decoding board from Vela Research. An interesting technical aspect of this card is that it communicates with the host computer through the SCSI (Small Computer Systems Interface) protocol. We have written our own Linux device driver as an extension of the generic Linux SCSI support to communicate with this unit. An advantage of this solution is that it provides a digital HD-SDI (uncompressed) output for very high picture quality and a genlock input for external synchronization.

2. Software-based: We use the libmpeg2 library – a highly optimized rendering code that provides hardware-assisted MPEG decoding on current generation graphics adapters. Through the XvMC extension of the X11 graphical user interface on Linux, libmpeg2 utilizes the motion compensation and iDCT hardware capabilities of modern GPUs (e.g., Nvidia). This is a very cost effective solution. For example, we use a graphics card based on an Nvidia FX 5200 GPU that can be obtained for less than $100. In terms of performance this setup achieves approximately 70 fps @ 1280x720 with a 3 GHz Pentium 4. Figure 2 shows the fan-less graphics card in our Shuttle XPC computer.

Figure 2: MPEG-2 decoding is achieved with an Nvidia FX 5200 based graphics card. The Linux-supplied drivers accelerate the iDCT and motion compensation steps required for MPEG-2 decoding.

2.2.3 HD Software Decoding Performance

Table 1 illustrates the measurements that we performed with the software decoder based on the libmpeg2 library. Two subalgorithms in the MPEG decoding process ─ motion compensation (MC) and the inverse discrete cosine transform (iDCT) ─ can be performed either in software on the host CPU (labeled SW in Table 1) or on the graphics processing unit (GPU, labeled HW in Table 1). The tests were performed on a dual Xeon 2.6 GHz processor Hewlett-Packard xw6000 workstation with an NVIDIA Quadro NVS 280 AGP graphics accelerator. The grey fields indicate real time (and better) performance. As can be seen from the results, real time decoding is possible with hardware assist.


Video Format            ATSC 1080i           ATSC 720p        HDV 720p
Frame resolution        1920 x 1080          1280 x 720       1280 x 720
Frames per second       30 (60 interlaced)   60 progressive   30 progressive
Compressed bandwidth    40 Mb/s              40 Mb/s          20 Mb/s
Rendering parameters:
  SW MC & iDCT          17.90 fps            30.37 fps        31.35 fps
  HW MC & iDCT          33.48 fps            63.28 fps        67.76 fps

Table 1: Software decoding of MPEG compressed HD video material with the fast libmpeg2 library. Two subalgorithms in the MPEG decoding process ─ motion compensation (MC) and the inverse discrete cosine transform (iDCT) ─ can be performed either in software on the host CPU (SW) or on the graphics processing unit (GPU; HW). Grey fields indicate real time or better performance.
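As a quick cross-check of Table 1 (restating the numbers above; the fps values are taken directly from the table), the following snippet flags which decoder configurations reach real-time performance, i.e., which cells would be grey in the original table.

    # Check which rows of Table 1 reach real-time decoding (decoded fps >= source fps).
    table = {
        "ATSC 1080i": (30, {"SW": 17.90, "HW": 33.48}),
        "ATSC 720p":  (60, {"SW": 30.37, "HW": 63.28}),
        "HDV 720p":   (30, {"SW": 31.35, "HW": 67.76}),
    }
    for fmt, (required_fps, results) in table.items():
        for mode, fps in results.items():
            status = "real time" if fps >= required_fps else "too slow"
            print(f"{fmt:10s} {mode} MC & iDCT: {fps:5.2f} fps ({status})")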

3. Characterization of KAL Network Environment

Figure 3 shows the logical topology of the KAL network infrastructure. As mentioned in the KAL milestone, the 2005 PWICE project plan is to enable a three-way video conferencing system among these three sites (ICN, GMP, PUS). ICN represents the operation unit at Incheon Airport; GMP, the main operation center at Kimpo in Seoul; and PUS, another operation unit at Kimhae in the southern Pusan area.

[Figure 3: Logical connectivity of the KAL test-bed, showing the three sites ICN (Incheon), GMP (Kimpo, Seoul) and PUS (Kimhae, Pusan), the dedicated links between the sites (2 Mbits/sec between GMP and PUS), and the 50 Mbits/sec public Internet links through the ISP (KT). The physical connectivity between ICN and PUS is unknown as yet.]

The path between ICN and GMP is a dedicated network whose backbone is connected entirely with gigabit switches and routers. Every end host at both sites is connected to its dedicated 100 Mbits/sec switch (or hub), and thus the maximum end-to-end capacity is limited to 100 Mbits/sec. If an end host were connected to a gigabit switch, a much higher available bandwidth between two end hosts, larger than 100 Mbits/sec, would be expected.

There are two different networks between GMP and PUS: one dedicated network and one shared network via public Internet access. The dedicated network, whose capacity is 2 Mbits/sec, is reserved for specific purposes, mainly the KAL daily operations. The alternative path, whose incoming/outgoing bandwidth is 50 Mbits/sec, is connected to a Korea Telecom (KT) KORNET backbone network. According to an internal source at KT, the ISP hosting the KAL Internet access, the KORNET backbone network is designed to be overprovisioned for the designated network traffic by a factor of two, and it has never reached even 50% utilization since its deployment. For the PWICE project, we use the dedicated ICN-GMP path and the public GMP-PUS path. Before establishing the final setup for three-way communication among ICN, GMP, and PUS, we will need to measure the available bandwidths of both of these paths.

4. Equipment Setup and Test

Figure 4: HYDRA-based remote conferencing system local test setup at Kimpo.

Figure 4 shows the test setup and the hardware used for testing at KAL. The computers and cameras (Sony HDR-HC1 HDV models) were purchased in Korea. The computers were shipped to USC on December 2. At the IMSC labs the RCS software was installed on three computers. The units were then tested with our local HDV cameras (JVC JY-HD10U) and on our local USC network. After successful tests of all software and hardware, the computers were shipped back to KAL on December 11. Shortly thereafter, Mr. Beomjoo Seo from IMSC/USC undertook a trip to KAL for the local installation and tests between Incheon, Kimpo and Pusan. Details of those activities are described in the next section.

5. Retransmission-based Packet Recovery

According to the observations made through initial experiments in September and early October at KAL, the path between ICN and GMP, although connected to a fully dedicated gigabit backbone network, was identified as a highly lossy link. The experiments reported that a client experienced rather tolerable but frequent video glitches with one-way video streaming. Such glitches are usually attributable to frequent packet drops on the underlying network path. When bi-directional video streams were injected into the path, the user experienced completely distorted picture quality at both ends, due to lower available bandwidth than was required.

The remote conferencing software delivered by the USC team was designed to support a packet recovery mechanism. However, the recovery method was disabled during the initial experimental period. Moreover, since it was originally designed for general streaming applications, it turned out to be less efficient for highly delay-constrained (i.e., low latency) streaming applications such as video conferencing. In response to these new observations, the USC team redesigned and upgraded the original recovery mechanism.

To characterize the unexpected network behavior, KAL measured the network statistics of the ICN-GMP path in the middle of November, using the network traffic measurement tool iperf. The statistics reported by this tool include the packet loss rate, the one-way delay, the number of out-of-order packets, and the available bandwidth. Figure 5 shows the packet loss rate of a one-way stream observed over 29 hours between Kimpo and Incheon. The loss rate ranged from 0.01% to 1.4%. Such a high packet loss rate, unusual in a dedicated network, indicated that some network component along the path might be a bottleneck. The one-way delay was reported to be far less than 1 millisecond, as expected. The one-way available bandwidth, around 40 Mbits/sec, was consistent and stable over time. Unfortunately, the measurement statistics for bi-directional streaming were not collected during the measurement period.


Figure 5: End-to-end packet loss rates of the network path between Kimpo (GMP) and Incheon (ICN) observed during November 15-16, 2005.

Under the assumption that every network component on the path between GMP and ICN was configured correctly, the USC team characterized the network link as having 40 Mbits/sec of available bandwidth, negligible one-way delay and out-of-order packets, and a packet loss rate of up to 1.4%. These parameters were used to guide our design of the new packet recovery algorithm.


The new packet recovery algorithm has the following features:
• Re-use of the existing retransmission based recovery solution [2, 5]
• Reduction of the response time of a retransmission request

There are many alternative solutions to recover lost packets. One popular solution is to use redundant packets, such as a Forward Error Correction (FEC) enabled coding scheme. This approach removes the delay introduced by a retransmission request, but it uses more than the minimally required network bandwidth and puts additional strain on the CPU. We found that our retransmission approach was an excellent solution in the KAL environment.
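As a rough, back-of-the-envelope illustration of this trade-off (our own arithmetic, not a measurement): with a stream of roughly 20 Mb/s, a single-retransmission scheme adds extra traffic only in proportion to the loss rate actually experienced, whereas an FEC scheme adds a fixed redundancy overhead whether or not packets are lost.

    # Rough illustration of the bandwidth argument above (not measured data).
    STREAM_MBPS = 20.0                     # approximate HDV stream rate

    def retransmission_overhead(loss_rate):
        # Single retransmission: extra traffic is roughly proportional to losses.
        return STREAM_MBPS * loss_rate

    def fec_overhead(redundancy):
        # FEC: fixed redundancy is sent regardless of the actual loss rate.
        return STREAM_MBPS * redundancy

    for loss in (0.001, 0.014, 0.05):
        print(f"loss {loss:.1%}: retransmission ~{retransmission_overhead(loss):.2f} Mb/s, "
              f"10% FEC ~{fec_overhead(0.10):.2f} Mb/s")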

Simulation Setup

[Figure 6 diagram: a Sender and a Receiver connected through a Packet Loss Model and a Delay Model; the sender transmits the original data (plus retransmitted data) and the receiver returns retransmission requests.]

Figure 6: Simulation model of retransmission based packet recovery algorithm.

We validated our new retransmission scheme in a loss-free network environment extensively. To emulate a loss-prone network, we included a probabilistic packet loss model and a deterministic delay model at the receiver side as shown in Figure 6. The packet loss model drops incoming packets probabilistically before delivering them to the receiver application session. The receiver application detects missing packets by examining the sequence number in the data packet header. If it finds any missing packets, it immediately issues a retransmission request to the sender; the delay model postpones the delivery of the retransmission requests by a given amount of time.
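A minimal sketch of this validation model is shown below (in Python, for illustration only; the parameter values and names are assumptions, and the actual simulator additionally replays the retransmitted data after the modeled delay). It combines a probabilistic drop decision, gap detection on the sequence numbers, and a fixed delay on the delivery of each retransmission request.

    # Minimal sketch of the Figure 6 validation model (illustrative only).
    import random

    class LossModel:
        def __init__(self, loss_rate):
            self.loss_rate = loss_rate
        def deliver(self, seq):
            return random.random() >= self.loss_rate   # True if the packet survives

    class Receiver:
        def __init__(self):
            self.expected = 0
            self.requests = []                          # sequence numbers to re-request
        def on_packet(self, seq):
            # Any gap between the expected and received sequence number is a loss.
            for missing in range(self.expected, seq):
                self.requests.append(missing)
            self.expected = max(self.expected, seq + 1)

    def simulate(num_packets=10000, loss_rate=0.05, request_delay_ms=10):
        loss, rx = LossModel(loss_rate), Receiver()
        pending = []                                    # (due_time_ms, seq) requests in flight
        now = 0.0
        for seq in range(num_packets):
            if loss.deliver(seq):
                rx.on_packet(seq)
            # The delay model postpones delivery of each retransmission request.
            while rx.requests:
                pending.append((now + request_delay_ms, rx.requests.pop(0)))
            now += 1.0                                  # assume ~1 ms between packets
        print(f"{len(pending)} retransmission requests issued "
              f"for {num_packets} packets at {loss_rate:.0%} loss")

    if __name__ == "__main__":
        simulate()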


(a) 10% loss without retransmission (b) 10% loss with retransmission

(c) 5% loss without retransmission (d) 5% loss with retransmission

(e) 1% loss without retransmission (f) 1% loss with retransmission

Figure 7: Effect of retransmission based packet recovery algorithm. Left figures (a), (c), and (e) show the picture quality without any packet recovery while right figures (b), (d), and (f) depict the quality with retransmission based packet recovery.

We used a 2-state Markov model, known as the Gilbert Model to emulate the bursty packet loss behavior of real-world networks [3]. The Gilbert Model is well known and extensively used in network research.
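A two-state Gilbert loss model of this kind can be sketched as follows; the transition probabilities shown are illustrative examples, since the report does not list the values that were actually used.

    # Gilbert (2-state Markov) packet loss model sketch. State GOOD delivers
    # packets, state BAD drops them; transitions between the two states create
    # the bursty loss pattern typical of real networks.
    import random

    class GilbertModel:
        def __init__(self, p_good_to_bad, p_bad_to_good):
            self.p_gb = p_good_to_bad      # probability of entering a loss burst
            self.p_bg = p_bad_to_good      # probability of leaving a loss burst
            self.good = True

        def delivered(self):
            # Advance the Markov chain one step and report whether the packet survives.
            if self.good:
                if random.random() < self.p_gb:
                    self.good = False
            else:
                if random.random() < self.p_bg:
                    self.good = True
            return self.good

    if __name__ == "__main__":
        # Example parameters giving an average loss rate of p_gb/(p_gb+p_bg), about 4.8%.
        model = GilbertModel(p_good_to_bad=0.01, p_bad_to_good=0.2)
        drops = sum(1 for _ in range(100000) if not model.delivered())
        print(f"simulated loss rate: {drops / 100000:.2%}")

The average loss rate of such a chain is p_gb / (p_gb + p_bg), while the typical burst length is controlled by p_bg, which is what makes the model a better match for real networks than independent random drops.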


Simulation Results

We fixed the delay parameter at 10ms, which was chosen to represent the maximum round trip delay within Korea. We tested three different packet loss rates -- 1%, 5% and 10%. To recover lost packets, our mechanism was configured to employ a single retransmission request.

Figure 7 illustrates the visual results of our tests. The video screen shots reveal that our software recovers many of the lost packets and maintains a tolerable picture quality with a few noticeable glitches even in extremely loss-prone environments – a 10% packet loss rate – as long as the network can provide more than the required available bandwidth (right column of the figures). As seen in Figure 7(f), our retransmission scheme recovers lost packets very successfully in a 1%-packet-loss environment. The picture quality with our retransmission scheme in a 2%-packet-loss environment, not depicted in this figure, showed a similar tendency to the 1%-packet-loss environment.

Activities during December 21-26, 2005 at KAL in Korea

After testing the newly designed retransmission method, the USC team visited Korea in late December and tested the streaming capability of the path between GMP and ICN and the path between GMP and PUS. Section 5.1 summarizes the results of the experiments performed on the link between GMP and ICN. Section 5.2 reports new challenges on the link between GMP and PUS.

Each subsection presents the measured available bandwidth, the observed packet loss rate, and an impression of the experienced streaming quality. To measure the available bandwidth between two end hosts, we used iperf 2.0.2, configured to inject bi-directional network traffic simultaneously.

The packet loss rates were computed from our application trace files. The application software logged lost packets only when missing packets were detected by observing the sequence numbers in the data packet headers. If too many consecutive packets were missing, we intentionally disabled the retransmission mechanism temporarily so as not to worsen the current network conditions. For easier processing and counting of the number of lost packets, we disregarded all late packets. Throughout the logged statistics, such late packets were rarely observed; thus, the overall performance statistics can be treated as reflecting the real network congestion state.
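The loss-rate computation described above, counting gaps in the sequence numbers and disregarding late arrivals, can be illustrated with a short script; the trace format is an assumption, since the actual log layout of the application is not reproduced in this report.

    # Illustrative loss-rate computation from a trace of received sequence
    # numbers, mirroring the procedure above: count gaps, disregard late arrivals.
    def loss_rate(received_seqs):
        highest = -1
        received, late = set(), 0
        for seq in received_seqs:
            if seq <= highest and seq not in received:
                late += 1            # arrived after a higher sequence number; disregarded
            received.add(seq)
            highest = max(highest, seq)
        expected = highest + 1
        lost = expected - len(received) + late   # late packets still count as lost here
        return lost / expected if expected else 0.0

    if __name__ == "__main__":
        # Example: packets 3 and 7 never arrive, packet 5 arrives late.
        trace = [0, 1, 2, 4, 6, 5, 8, 9]
        print(f"loss rate: {loss_rate(trace):.1%}")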

The streaming quality describes the author’s subjective impression of the overall video quality experienced during every experimental run.

5.1. Experiments between ICN and GMP

The following experiments were performed on December 22, 2005.


Available Bandwidth

As shown in Figure 8, the available bandwidth of the end-to-end link from ICN to GMP was estimated to guarantee 88 Mbits/sec of network throughput. The bandwidth in the opposite direction from GMP to ICN was evaluated at around 46 Mbits/sec. During the measurements, the utilization of the gigabit backbone network was reported to be less than 1%.

Compared with the KAL statistics depicted in Figure 5, the newly established measurement results are consistent with our intuition. In fact, the KAL statistics collected during the middle of November were skewed by an incorrect router configuration in the middle of the network path between ICN and GMP: one of the network links was set up as simplex rather than full duplex.

Figure 8: Measured available bandwidth of GMP and ICN.

Packet Loss Rate

Earlier KAL measurement statistics reported a 1.4% packet loss rate. This, too, was an artifact of the non-optimal link interface setup. The new measurements did not report any packet losses during bi-directional streaming. Moreover, disabling the retransmission based packet recovery did not cause any noticeable video glitches.


Streaming Quality

As seen in Figure 9, we did not experience any noticeable video glitches during the remote conferencing from ICN to GMP. The opposite connection from GMP to ICN also showed the same streaming quality.

Figure 9: Picture quality of streaming connection from ICN to GMP.

5.2. Experiments between GMP and PUS

Unlike the link between GMP and ICN, the network characterization of the link between GMP and PUS is a very challenging task. To the best of our knowledge, no live HD streaming application across the public Internet has been reported so far.

Because there are too many potential factors causing network congestion, we narrowed the measurement scope for our experiments to the following two network parameters, which are closely related to network performance. In particular, the network kernel buffer sizes at the client and server are the most crucial factors that affect the network throughput significantly.

Effect of network kernel buffer size
We varied the network kernel buffer size from 4MB to 32MB. The default TCP/IP settings on modern operating systems are not designed to support large data transmissions such as HD streaming, so the kernel buffer size must be chosen carefully to accommodate bursty data transmission. As proposed in [4], the network kernel buffer size for HD streaming should be configured to at least 512KB. However, even a 512KB kernel buffer may suffer from unexpected glitches and small stretches during streaming, so a more realistic value was chosen manually through trial and error during the preliminary experimental period. Our measurements show that 512KB is such a marginal value that we usually overprovisioned the kernel buffer with values ranging from 4MB to 32MB. Note that a larger kernel buffer size may result in a longer latency; however, through a variety of experiments in different networks, we concluded that going beyond the 4MB that is optimal for HD streaming does not affect the overall latency seen by the application. Unfortunately, none of the kernel buffer sizes we tried improved the network throughput on the GMP-PUS path. This symptom implied that the path was already over-utilizing the available bandwidth resources.
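On Linux, overprovisioning the per-socket kernel buffers in the way described above is typically done through the SO_RCVBUF and SO_SNDBUF socket options, subject to the system-wide net.core.rmem_max and net.core.wmem_max limits. The snippet below is a generic illustration, not the actual configuration code of the RCS software.

    # Generic illustration of overprovisioning the UDP socket kernel buffers
    # (4 MB here); the effective size is capped by net.core.rmem_max/wmem_max,
    # which may need to be raised by the administrator.
    import socket

    BUF_BYTES = 4 * 1024 * 1024           # 4 MB, the lower end of the range tested

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)

    # The kernel may adjust the requested size; report the value it actually applied.
    print("receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    print("send buffer:   ", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))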

Effect of UDP packet size
We tested two UDP packet sizes -- 564 bytes and 940 bytes. A larger packet size results in more efficient data delivery by reducing the number of network interrupts. Using more than 1472 bytes in a UDP packet does not produce any performance gain because payloads exceeding 1500 bytes are segmented into smaller fragments by the network stack. In real applications, a packet size of more than 1024 bytes may also be segmented, depending on the TCP/IP implementation details; thus, our design prefers not to use more than 1024 bytes. As a result, our system, aware of the MPEG-2 Transport Stream format, uses multiples of 188 bytes (one transport stream packet) as the base size of UDP packets. Figure 10 shows a slight improvement of the larger packet size (940 bytes) over the smaller packet size (564 bytes). However, neither makes much difference under the current network conditions of the GMP-PUS path.
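The payload sizing rule described above, a multiple of the 188-byte transport stream packet kept below the fragmentation thresholds, can be expressed in a few lines; the limits used simply restate the numbers in the text.

    # Pick a UDP payload size that is a multiple of the 188-byte MPEG-2 TS
    # packet and stays below a chosen threshold (1024 bytes per the design
    # preference above; 1472 bytes is the Ethernet fragmentation limit).
    TS_PACKET = 188

    def payload_size(limit_bytes=1024):
        return (limit_bytes // TS_PACKET) * TS_PACKET

    print(payload_size(1024))   # 940 bytes (5 TS packets), the larger size tested
    print(payload_size(600))    # 564 bytes (3 TS packets), the smaller size tested
    print(payload_size(1472))   # 1316 bytes would still fit in one Ethernet frame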

The following experiments were performed on December 26, 2005.

Available Bandwidth

Figure 10: Measured available bandwidth of GMP and PUS


During the experiments, the average network utilization of the GMP-PUS path was repeatedly reported to be less than 20% of its full capacity, i.e., 10 Mbits/sec of cross traffic between GMP and PUS. From this observation, we expected to be able to accomplish one live HD stream, whose required bandwidth is around 25 Mbits/sec, without hitting a network bottleneck. Figure 10, however, showed much lower available bandwidth along the path: 6 Mbits/sec from PUS to GMP and 1 Mbits/sec from GMP to PUS. These results were observed consistently until the end of the experiments.

Figure 10 also depicts another interesting finding, similar to the tendency observed in Figure 8: any outgoing link from GMP has a lower available bandwidth than the corresponding incoming link. This finding implies that there is considerable outgoing network congestion around GMP, because the GMP operation center produces comparatively heavier data traffic than any of the other sites.

Packet Loss Rate

The GMP-PUS path suffers from too many retransmission requests because of the many packet drops that occur when packets travel over the public Internet. Even without any retransmission policy, the RCS application software continued to observe a large number of packet drops.

(a) one-way loss rate (b) two-way loss rate

Figure 11: Packet loss rate observed during the HD streaming between GMP and PUS.

Figure 11 shows the packet loss rates of the GMP-PUS path. The higher available bandwidth link, from PUS to GMP, reports a smaller packet loss rate than the lower available bandwidth link, but its converged packet loss rate is still too high to support any HD streaming.

Streaming Quality


Figure 12: Picture quality of streaming connection from GMP to PUS.

As shown in Figure 12, a packet loss rate of more than 60% resulted in unrecognizable video.

5.3. Experimental Results between Inha University and USC

5.3.1. Early Results in 2004

On 8 October 2004, an integrated prototype combining the latest HYDRA streaming components was installed at the IMSC laboratories and the Inha Memorial Library. Subsequently, a trans-pacific high definition audio/video two-way interactive and live teleconferencing session was conducted between Inha University and IMSC. Our low latency real-time streaming technology enabled high resolution, life-sized video displays and natural interaction at both locations. The researchers tested the features of the current system and discussed future extensions over the common link. The results of this latter effort are shown in Figure 13 below.

Figure 13: HYDRA-based video teleconferencing system showing participants in Korea (Yoo-Sung Kim, Kyung Sup Kwak and Roger Zimmermann at Inha Memorial Library) and

USA (Moses Pawar, Beomjoo Seo and Min Qin in the IMSC laboratory, from left to right)


5.3.2. Results in 2005

Figure 14: HYDRA-based video teleconferencing system showing participants in Korea (Se-Geun Park, Jaehyun Park and Roger Zimmermann at Inha Memorial Library) and USA

(Beomjoo Seo in the IMSC laboratory, from left to right)

5.4. Discussion

From the experiments on the network path between ICN and GMP, we observed that the misconfiguration of any network component on the path can cause a variety of singular network behaviors that cannot be intuitively understood. After the configuration was fixed, the network behaved as the experts expected.

For the same reason, the huge number of packet drops suffered in the experiments over the public Internet between GMP and PUS is likely to be closely related to some other, as yet unknown, cause. According to the internal source at the ISP company, no abnormal packet drops corresponding to those we observed in our experiments were reported.

At the time of this writing, KAL has reported the most probable cause of the significant packet drops and the lower-than-expected available bandwidth on the GMP-PUS path: the use of a 10 Mbps hub instead of a 100 Mbps switching hub [6]. This by itself explains why the available bandwidth never exceeded 10 Mbits/sec and why so many packet drops occurred in the middle of the path. Once corrected, we expect enough available bandwidth on the connection from PUS to GMP, but we still doubt whether the network link from GMP to PUS can allocate enough available bandwidth for live HD streaming. Even so, we still need bi-directional HD streaming capability on the path to support three-way conferencing.

5.5. Wireless HD Streaming

On July 28, 2005, we demonstrated for the first time the HYDRA HD video live streaming system over wireless ad-hoc networks at the WCA 2005 event (Wireless Communications Association) at the Marriott Wardman Park Hotel in Washington, DC. We have experimented with both wireless infrastructure mode (which requires an access point) and ad-hoc mode (no access point). Figure 15 shows the demonstration setup for our wireless ad-hoc live streaming. The stream is transmitted directly between two laptops with 802.11a wireless network interface cards.

Figure 15: HYDRA HD live video streaming system through an 802.11a wireless ad-hoc network. An HD camera is connected to the laptop on the left, and the high quality video is transmitted to the right laptop through 802.11a ad-hoc mode. No access point is involved.


5.6. Appendix

Iperf statistics of the connection between GMP and ICN

Script started on Thu 22 Dec 2005 01:34:41 PM KST

[yima@miedemo2 src]$ ./iperf -c 10.222.6.222 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.222.6.222, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.55.33.222 port 55745 connected with 10.222.6.222 port 33445
[ 5] local 10.55.33.222 port 33445 connected with 10.222.6.222 port 34708
[ 4] 0.0-10.0 sec 106 MBytes 88.8 Mbits/sec
[ 5] 0.0-10.0 sec 60.7 MBytes 50.9 Mbits/sec
[yima@miedemo2 src]$ ./iperf -c 10.222.6.222 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.222.6.222, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.55.33.222 port 55747 connected with 10.222.6.222 port 33445
[ 5] local 10.55.33.222 port 33445 connected with 10.222.6.222 port 34709
[ 4] 0.0-10.0 sec 106 MBytes 89.0 Mbits/sec
[ 5] 0.0-10.0 sec 56.8 MBytes 47.6 Mbits/sec
[yima@miedemo2 src]$ ./iperf -c 10.222.6.222 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.222.6.222, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.55.33.222 port 55748 connected with 10.222.6.222 port 33445
[ 5] local 10.55.33.222 port 33445 connected with 10.222.6.222 port 34710
[ 4] 0.0-10.0 sec 105 MBytes 88.0 Mbits/sec
[ 5] 0.0-10.0 sec 55.1 MBytes 46.2 Mbits/sec
[yima@miedemo2 src]$ ./iperf -c 10.222.6.222 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.222.6.222, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.55.33.222 port 55749 connected with 10.222.6.222 port 33445
[ 5] local 10.55.33.222 port 33445 connected with 10.222.6.222 port 34711
[ 4] 0.0-10.0 sec 105 MBytes 88.3 Mbits/sec
[ 5] 0.0-10.0 sec 59.6 MBytes 50.0 Mbits/sec
[yima@miedemo2 src]$ exit
exit

Script done on Thu 22 Dec 2005 01:36:23 PM KST

Iperf statistics of the connection between GMP and PUS

Script started on Mon 26 Dec 2005 02:34:00 PM KST


[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50840 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38400
[ 4] 0.0-10.0 sec 7.27 MBytes 6.09 Mbits/sec
[ 5] 0.0-10.1 sec 1.36 MBytes 1.13 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50841 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38401
[ 4] 0.0-10.1 sec 7.46 MBytes 6.20 Mbits/sec
[ 5] 0.0-10.1 sec 1.26 MBytes 1.04 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50842 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38418
[ 4] 0.0-10.1 sec 7.48 MBytes 6.23 Mbits/sec
[ 5] 0.0-10.2 sec 1.24 MBytes 1.02 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50843 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38431
[ 4] 0.0-10.1 sec 7.39 MBytes 6.15 Mbits/sec
[ 5] 0.0-10.1 sec 1.32 MBytes 1.09 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50844 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38432
[ 4] 0.0-10.1 sec 7.33 MBytes 6.08 Mbits/sec
[ 5] 0.0-10.1 sec 1.23 MBytes 1.01 Mbits/sec


[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50845 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38505
[ 4] 0.0-10.1 sec 7.39 MBytes 6.16 Mbits/sec
[ 5] 0.0-10.1 sec 1.21 MBytes 1.01 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50846 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38512
[ 4] 0.0-10.0 sec 6.71 MBytes 5.62 Mbits/sec
[ 5] 0.0-10.1 sec 1.74 MBytes 1.45 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 50847 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38565
[ 4] 0.0-10.1 sec 7.09 MBytes 5.91 Mbits/sec
[ 5] 0.0-10.1 sec 1.31 MBytes 1.09 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 52870 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 38566
[ 4] 0.0-10.0 sec 7.36 MBytes 6.16 Mbits/sec
[ 5] 0.0-10.1 sec 1.29 MBytes 1.07 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 52871 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 51319
[ 4] 0.0-10.1 sec 6.56 MBytes 5.45 Mbits/sec
[ 5] 0.0-10.1 sec 1.55 MBytes 1.28 Mbits/sec


[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 52872 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 51320
[ 4]  0.0-10.0 sec  6.80 MBytes  5.69 Mbits/sec
[ 5]  0.0-10.1 sec  1.54 MBytes  1.28 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 52873 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 51321
[ 4]  0.0-10.1 sec  7.34 MBytes  6.09 Mbits/sec
[ 5]  0.0-10.1 sec  1.41 MBytes  1.16 Mbits/sec

[yima@wongyima src]$ ./iperf -c 210.105.6.4 -p 33445 -d
------------------------------------------------------------
Server listening on TCP port 33445
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 210.105.6.4, TCP port 33445
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.71.31.222 port 52874 connected with 210.105.6.4 port 33445
[ 5] local 10.71.31.222 port 33445 connected with 210.105.6.4 port 51326
[ 4]  0.0-10.0 sec  6.93 MBytes  5.79 Mbits/sec
[ 5]  0.0-10.1 sec  1.37 MBytes  1.14 Mbits/sec

[yima@wongyima src]$ exit

Script run on Mon 26 Dec 2005 02:37:52 PM KST
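The transcript above covers twelve consecutive bidirectional (dual-mode, -d) iperf runs from the laboratory client 10.71.31.222 to the server at 210.105.6.4. In each run, stream [ 4] is the locally initiated client-to-server connection and stream [ 5] the reverse (server-to-client) connection created for the dual test; the server side is assumed to have been started with the matching command ./iperf -s -p 33445, which is not shown in the transcript. As a convenience, the following Python sketch (illustrative only, not part of the original test procedure) averages the per-direction throughput reported in a saved copy of the transcript; the file name iperf_log.txt is hypothetical.

    # Illustrative sketch only: averages the per-direction throughput figures
    # reported in the iperf transcript above. The log file name is hypothetical.
    import re

    # Matches result lines such as: "[ 4]  0.0-10.1 sec  7.46 MBytes  6.20 Mbits/sec"
    RESULT = re.compile(
        r"\[\s*(\d+)\]\s+[\d.]+-[\d.]+\s+sec\s+[\d.]+\s+MBytes\s+([\d.]+)\s+Mbits/sec"
    )

    rates = {}  # stream ID -> list of reported Mbits/sec values
    with open("iperf_log.txt") as log:  # hypothetical file name
        for line in log:
            match = RESULT.search(line)
            if match:
                rates.setdefault(match.group(1), []).append(float(match.group(2)))

    for stream_id, values in sorted(rates.items()):
        average = sum(values) / len(values)
        print("stream [%s]: %d runs, average %.2f Mbits/sec"
              % (stream_id, len(values), average))

Applied to the twelve runs shown above, this yields an average of roughly 6.0 Mbits/sec in the outbound (client-to-server) direction and about 1.1 Mbits/sec in the reverse direction.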


5.7. Conclusions and Future Extensions

We have designed and implemented a live streaming framework that specifically targets very high-resolution, multi-channel video and audio transmissions to enable collaborative activities in immersive environments. Initial tests demonstrate the feasibility of our approach. We plan to continue work on the technical aspects of the system, specifically targeting (1) the reduction of latency among the participants, (2) the extension to three-way and n-way collaborative activities, including asymmetric communications, and (3) the integration of the live streaming components with our high-speed immersive media stream recorder prototype, which will allow recording, archiving, and playback of multiple streams of different media types. Up-to-date information can be found on our home page at http://dmrl.usc.edu/hydra.html.


5.8. Publications

[1] Roger Zimmermann, Kun Fu and Dwipal A. Desai. HYDRA: High-performance Data Recording Architecture for Streaming Media. Book chapter in Video Data Management and Information Retrieval, editor Sagarmay Deb, University of Southern Queensland, Toowoomba, QLD 4350, Australia. Published by Idea Group Inc., publisher of the Idea Group Publishing, Information Science Publishing and IRM Press imprints, 2004. ISBN 1-59140-571-8.

[2] Roger Zimmermann, Moses Pawar, Dwipal A. Desai, Min Qin, and Hong Zhu. High Resolution Live Streaming with the HYDRA Architecture. Published in the ACM Computers in Entertainment journal, Vol. 2, Issue 4, Oct./Dec. 2004.

[3] Roger Zimmermann, Chris Kyriakakis, Cyrus Shahabi, Christos Papadopoulos, Alexander A. Sawchuk and Ulrich Neumann. The Remote Media Immersion System. IEEE MultiMedia, vol. 11, no. 2, pp. 48-57, April-June 2004.

[4] Alexander A. Sawchuk, Elaine Chew, Roger Zimmermann, Christos Papadopoulos, and Chris Kyriakakis. From Remote Media Immersion to Distributed Immersive Performance. ACM SIGMM 2003 Workshop on Experiential Telepresence (ETP 2003), November 7, 2003, Berkeley, CA.

[5] C. Papadopoulos and G. M. Parulkar, Retransmission-Based Error Control for Continuous Media Applications. Proceedings of NOSSDAV, 1996.

[6] Ajay Tirumala, Feng Qin, Jon Dugan, Jim Ferguson, and Kevin Gibbs, Iperf Version 2.0.2, May 3, 2005, URL: http://dast.nlanr.net/Projects/Iperf/

[7] Roger Zimmermann, Kun Fu, Nitin Nahata, and Cyrus Shahabi, Retransmission-Based Error Control in a Many-to-Many Client-Server Environment, SPIE Conference on Multimedia Computing and Networking 2003 (MMCN 2003), Santa Clara, California, January 29-31, 2003.

[8] W. Jiang and H. Schulzrinne, Modeling of Packet Losses and Delay and Their Effect on Real-Time Multimedia Service Quality, In The 10th International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV 2000), June, 2000.

[9] Stanislav Shalunov, TCP over WAN Performance Tuning and Troubleshooting, http://people.internet2.edu/%7Eshalunov/writing/tcp-perf.html

[10] C. Papadopoulos and G. M. Parulkar, Retransmission-based Error Control for Continuous Media Applications, In Proceedings of the 6th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 1996), Zushi, Japan, April 23-26, 1996.

[11] Choong Hee Lee, PWICE Project Report, Email communication, Jan. 30, 2006.

[12] Min Qin and Roger Zimmermann. High Definition Live Streaming. Book chapter in Encyclopedia of Multimedia, editor Borko Furht, published by Springer, 2006.

[13] Roger Zimmermann, Cyrus Shahabi, Kun Fu, and Mehrdad Jahangiri. A Multi-Threshold Online Smoothing Technique for Variable Rate Multimedia Streams. Accepted for publication in the Multimedia Tools and Applications journal, Springer Publishers, vol. 28, no. 2, 2006.

[14] Roger Zimmermann, Cyrus Shahabi, Kun Fu, and Shu-Yuen Didi Yao. Scalability Evaluation of the Yima Streaming Media Architecture. Published in the Software Practice & Experience (SP&E) journal, volume 35, issue 4, pages 345-359, December 2004.


7. Aircraft Data Rapid Acquisition System

Inha University
