
  • Bachelor Informatica

Determining meaningful metrics for Adaptive Bit-rate Streaming HTTP video delivery

    Abe Wiersma

    Student number: 10433120

    15th June 2016

    Supervisor(s): Dirk Griffioen & Daniel Romão

    Signed: Robert Belleman

Informatica — Universiteit van Amsterdam

  • Abstract

The video on demand industry has become the largest source of Internet traffic in the world, and delivering the hosting infrastructure is a headache for system engineers. The capabilities of a server are mostly guesswork, and the number of clients it can serve is based on nothing more than the educated guess of the implementing system engineer. This paper aims to determine which measurements truly matter in determining a server's real-world capabilities. For this purpose a load testing tool called Tensor was created, with a Flask backend and an Angular frontend. The tool combines resource monitoring and load data in D3 graphs. Load data is generated with WRK, an HTTP benchmarking tool, requesting large amounts of data from a video hosting server. As a test of the tool's performance, two video hosting servers are put side by side and their capabilities are measured.

  • Contents

    1 Introduction 4

    1.1 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

    2 Adaptive Bit-rate Streaming 7

    2.1 Apple HLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

    2.2 Microsoft HSS . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    2.3 Adobe HDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    2.4 MPEG-DASH . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    2.5 Side by side . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

    3 Infrastructure 14

    3.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

    3.2 Set-ups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

    3.3 Performance measurements . . . . . . . . . . . . . . . . . . . . 17

    4 Tensor 19

    4.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

    4.2 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 21

    4.3.1 Backend . . . . . . . . . . . . . . . . . . . . . . . . . . 23

    4.3.2 Performance Co-Pilot . . . . . . . . . . . . . . . . . . . 24

    4.3.3 Frontend . . . . . . . . . . . . . . . . . . . . . . . . . . 25


5 Experiments 27
5.1 Testing setup . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.1.1 Source . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.1.2 Video Hosting Servers . . . . . . . . . . . . . . . . . . 28
5.1.3 Video & Software . . . . . . . . . . . . . . . . . . . . . 29
5.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.2.1 Server: usp.abewiersma.nl . . . . . . . . . . . . . . . . 30
5.2.2 Server: demo.unified-streaming.com . . . . . . . . . . . 32

6 Conclusion 34
6.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    Appendices 39

Appendix A Results usp.abewiersma.nl
A.1 HDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.2 HLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.3 HSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.4 DASH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Appendix B Results demo.unified-streaming.com
B.1 HDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
B.2 HLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
B.3 HSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
B.4 DASH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

    Appendix C Glossary


  • CHAPTER 1

    Introduction

Cisco recently unveiled a report showing that by 2019, on-line video will be responsible for 80% of global Internet traffic[1], with 72% of that content delivered by Content Delivery Networks. The mode of transport for video content used to be progressive download over HTTP, in which a TCP connection simply transferred a video file to a client as quickly as possible.

While TCP is currently the most used underlying protocol, the origin of streaming media lies with the Real-Time Transport Protocol (RTP) over UDP. UDP was used because of TCP's assumed hurtfulness to video streaming performance, a result of throughput variations and potentially large retransmission delays. As UDP is a lot simpler than TCP, this seemed like the obvious way to go. UDP, in contrast to TCP, does not offer guaranteed arrival of packets at the endpoint and instead focuses on fast transmission (best-effort delivery)[2].

In practice the disadvantages of media streaming over TCP did not necessarily apply, and contrary to HTTP over TCP, RTP struggles with the traversal of firewalls and with the filtering done by Network Address Translation (NAT) on a router[3]. Rate adaptation, which was first done server-side by push-based RTP, started migrating to HTTP over TCP; in HTTP Adaptive Bit-rate Streaming (ABS) the rate adaptation is done client-side. The adoption of client-side rate adaptation gave way to modern Adaptive Bit-rate Streaming, with a bit-rate for every user's specific needs.

Because HTTP web servers and infrastructure were already in place to serve HTML content, this base could be extended for use with ABS. Tools for HTTP benchmarking are widely available, e.g. WRK, ApacheBench and HTTPerf, but none are dedicated enough to benchmark Adaptive Bit-rate Streaming.

To fill the void left by the lack of benchmarking solutions tailored for ABS, the load testing tool named Tensor was made. Tensor was made on the basis of a literature study into what measurements should be done to describe the performance of an origin (an Adaptive Bit-rate Streaming server). As part of an experiment, the Tensor load testing tool is used to benchmark two origins running the Unified Origin1. The servers host the same video content, which is requested over four Adaptive Bit-rate Streaming implementations: HTTP Smooth Streaming (Microsoft HSS), HTTP Live Streaming (Apple HLS), HTTP Dynamic Streaming (Adobe HDS) and Dynamic Adaptive Streaming over HTTP (MPEG-DASH).

    1.1 Related work

When ABS over RTP was replaced with ABS over HTTP, performance testing was done exclusively to test the implemented rate-adaptation algorithms, in papers in which the algorithms of HDS, HLS, HSS and, since 2012, also DASH are put side by side. "An Experimental Evaluation of Rate-Adaptation Algorithms in Adaptive Streaming over HTTP"[3] is a paper released before the introduction of DASH that tests the performance of each solution's algorithm under restricted resources. A similar paper, "A Comparison of Quality Scheduling in Commercial Adaptive HTTP Streaming Solutions on a 3G Network"[4], measures the ability of rate adaptation algorithms to keep a full buffer in a 'real' 3G environment. Video BenchLab is an open-source tool accompanying the paper "Video BenchLab: an open platform for realistic benchmarking of streaming media workloads"[5]. That paper first describes the performance of different browsers streaming the same video to one client, and afterwards describes an experiment in which 24 of these clients are run concurrently.

The narrative of that paper, however, lies with individual client performance and not server performance. The papers found are examples of client-side usage of an Adaptive Bit-rate Streaming implementation; the available literature did not provide information on the subject of load testing an origin.

1 http://www.unified-streaming.com/products/unified-origin

  • CHAPTER 2

    Adaptive Bit-rate Streaming

Adaptive Bit-rate Streaming over HTTP describes a method of delivering video content over the HTTP protocol while leaving the logic of rate adaptation to the clients[3]. Because HTTP was already the standard for web content delivery when ABS was introduced, the need for a new protocol for video content delivery was negated. The server is tasked with providing multiple profiles of the same video content, encoded for a multitude of resolutions (e.g. 480p, 720p, 1080p) with bit-rates matched to these resolutions. Video content encoded for equal resolutions, using different settings on an encoder, can result in smaller encoded video size or better encoded video quality. In Adaptive Bit-rate Streaming, video content is often encoded in H.264, which has different profiles with a number of settings for application-specific purposes.

The video content is stored on a server as partitioned fragments, or generated just-in-time. These fragments typically have a duration in the range of 2 to 10 seconds[6]. In practice these video fragments are referred to as video segments. The video segments have their references stored in a separate file: most ABS implementations use an XML format, but Apple, for example, stores them in a 'traditional' play-list file.

The client is tasked with the retrieval of video and meta-data from servers and applies the logic of a rate adaptation algorithm. An ABS stream typically starts with a streaming client requesting the meta-data that describes the different bit-rate options for the requested video stream. Fragments of lower bit-rate are usually requested first, to quickly fill up the buffer. The buffer of an ABS client is between 5 and 30 seconds long. The goal of an ABS algorithm is to provide the client with the highest possible bit-rate fragments; this starts during or after filling the buffer. Based on the network conditions, the bit-rate of the fragments will converge to a bit-rate that is bottlenecked either by the server or by the client.
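
As an illustration of this client-side logic (a minimal sketch, not an algorithm prescribed by the literature above), the following Python fragment picks the highest advertised bit-rate that fits within a safety margin of the recently measured throughput; the bit-rate values, safety factor and buffer threshold are hypothetical:

## abr_sketch.py -- illustrative only, not part of Tensor ##

# Bit-rates (bit/s) advertised in a manifest, e.g. 480p/720p/1080p profiles.
AVAILABLE_BITRATES = [1_500_000, 3_000_000, 5_000_000]

def pick_bitrate(measured_throughput_bps, buffer_seconds,
                 safety=0.8, min_buffer=5):
    """Return the highest bit-rate that fits within a fraction of the
    measured throughput; fall back to the lowest profile while the
    buffer is still being filled."""
    if buffer_seconds < min_buffer:
        return AVAILABLE_BITRATES[0]           # fill the buffer quickly
    budget = measured_throughput_bps * safety  # leave headroom for variation
    candidates = [b for b in AVAILABLE_BITRATES if b <= budget]
    return max(candidates) if candidates else AVAILABLE_BITRATES[0]

# Example: 4 Mbit/s measured, buffer already filled -> the 3 Mbit/s profile.
print(pick_bitrate(4_000_000, buffer_seconds=20))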

The client is left in charge of deciding what bit-rate fragments to request, resulting in an increase in server-side scalability. In HTTP ABS the client is the only stateful entity. This allows the client to switch from one stateless server to another without additional logic required on the server, because segment requests are independent of each other. For example, when congestion to one server increases, a client can decide to start requesting segments from another server instead.

Currently, Adaptive Bit-rate Streaming is used in many video content delivery solutions, both commercial and open-source. The most common Adaptive Bit-rate Streaming formats are HLS, HSS, HDS and MPEG-DASH, an ISO standard. Details on these formats are described next.

    2.1 Apple HLS

Apple HTTP Live Streaming (HLS) is the Apple format of the adaptive bit-rate streaming technique[7]. Apple packs baseline 3.0 encoded H.264 video content with one of three audio types (MP3, HE-AAC and AAC-LC) in an MPEG-2 Transport Stream. A segmenter subdivides the transport stream into 10-second parts. These parts are indexed in a file that keeps references to the fragmented files. HLS is the only major system that uses the MPEG-2 TS container instead of an ISO Base Media File Format (ISOBMFF) container. This might be because, in comparison to an MPEG-4 Part 12 container, MPEG-2 TS adds approximately 24% overhead relative to the audio/video data[8].
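
To illustrate how a client discovers the available bit-rates, the sketch below parses a hypothetical, heavily abbreviated HLS master playlist; the tag names come from the HLS draft[7], but the URIs and bandwidth values are invented for the example:

## hls_playlist_sketch.py -- hypothetical example playlist, illustrative only ##

MASTER_PLAYLIST = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1500000,RESOLUTION=854x480
video_480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
video_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
video_1080p.m3u8
"""

def parse_variants(playlist):
    """Return (bandwidth, uri) pairs listed in an HLS master playlist."""
    variants, pending_bandwidth = [], None
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = line.split(":", 1)[1]
            for attr in attrs.split(","):
                if attr.startswith("BANDWIDTH="):
                    pending_bandwidth = int(attr.split("=", 1)[1])
        elif line and not line.startswith("#") and pending_bandwidth is not None:
            variants.append((pending_bandwidth, line))
            pending_bandwidth = None
    return variants

print(parse_variants(MASTER_PLAYLIST))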


    2.2 Microsoft HSS

Microsoft delivers Adaptive Bit-rate Streaming, HTTP Smooth Streaming (HSS), using the Protected Inter-operable File Format (PIFF), which is based on the MPEG-4 Part 12 format[9]. The meta-data and references to video segments are stored in an XML-formatted manifest. The supported video codecs are VC-1 Advanced and H.264 (Baseline, Main and High). The PIFF media container is audio codec agnostic in theory, but only has two supported audio types: WMA and AAC. The typical PIFF segment length is 2 seconds.

    2.3 Adobe HDS

Adobe first used the Real Time Messaging Protocol (RTMP) for its own video streaming; later it developed HTTP Dynamic Streaming (HDS) to be integrated into its Flash player infrastructure. The HDS protocol delivers video content using the F4V file format, which is based on the MPEG-4 Part 12 format. The F4V file finds its origin in the original Adobe file format FLV, which was extended to comply with the MPEG-4 Part 12 specification. The container supports H.264 video encoding with either MP3 or AAC audio[10], and segments have a default length of 4 seconds.

    2.4 MPEG-DASH

Dynamic Adaptive Streaming over HTTP (DASH) has been the international standard for adaptive HTTP streaming since November 2011[11]. The deployment of the standard was meant to provide a universal solution for the market to rely on, unifying a landscape divided by vendor-published solutions like HLS, HSS and HDS. Because DASH focuses on providing a protocol implementation that is as universal as possible, the protocol allows for both MPEG-4 Part 12 derived and MPEG-2 TS containers. DASH's container agnosticism means that DASH is both video and audio codec agnostic. As Ultra High Definition hardware and video content gain acceptance, this agnosticism allows DASH to adopt new encodings that support Ultra High Definition and beyond. DASH should easily be able to adopt the H.265 encoding, which claims to offer up to 40% bit-rate reduction for the same quality and resolution as H.264[12], and which also offers encoding for higher resolutions than H.264. As with the other protocol implementations, the meta-data and the segment URLs for the different qualities and codecs of the video content are stored in a manifest file, which in DASH's case is the Media Presentation Description (MPD), an XML file (as seen in Figure 2.1). The DASH specification does not specify a segment length, which means picking a length is left to the implementing party.

Figure 2.1: Graphical representation of the DASH MPD manifest2. This shows the option for a video player on the client side to pick different bit-rate segments.

2 Source: http://www.cablelabs.com/adaptive-bitrate-and-mpeg-dash/
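
For comparison with HLS's play-list approach, the following sketch extracts the advertised bandwidths from a hypothetical, heavily abbreviated MPD; the element and attribute names follow the DASH schema represented in Figure 2.1, while the concrete identifiers and values are invented:

## mpd_sketch.py -- hypothetical abbreviated MPD, illustrative only ##
import xml.etree.ElementTree as ET

MPD = """<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="480p" bandwidth="1500000"/>
      <Representation id="720p" bandwidth="3000000"/>
      <Representation id="1080p" bandwidth="5000000"/>
    </AdaptationSet>
  </Period>
</MPD>
"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}

def representation_bandwidths(mpd_xml):
    """Return {representation id: bandwidth in bit/s} from an MPD string."""
    root = ET.fromstring(mpd_xml)
    reps = root.findall(".//dash:Representation", NS)
    return {r.get("id"): int(r.get("bandwidth")) for r in reps}

print(representation_bandwidths(MPD))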


    2.5 Side by side

Company    Name  Year  Audio           Video         Container  Default Length  Manifest
Microsoft  HSS   2008  WMA & AAC       VC-1 & H.264  PIFF*      2 sec           XML
Apple      HLS   2009  MP3 & (HE-)AAC  H.264         MPEG-2 TS  10 sec          M3U8
Adobe      HDS   2010  MP3 & AAC       H.264         F4V*       4 sec           XML
MPEG       DASH  2011  Any             Any           Any        Any             XML

Table 2.1: Several Adaptive Bit-rate Streaming implementation specifications side by side (* MPEG-4 Part 12 derived container).

Even though the available technologies are very similar, there are a few things to consider when choosing which technology to use for hosting content. Because DASH was developed with the input of the companies that made the original implementations, updates and support for Adobe's HDS and Microsoft's HTTP Smooth Streaming seem to be declining. These companies are now actively involved in the development and adoption process of the ISO/IEC standard DASH.3

One possible reason Apple's HLS still has the support of its company and its users is that HLS, like DASH, does not require side mechanisms and can be deployed on regular HTTP servers like Apache, NGINX, IIS, etc. For HSS the segment request URLs are not URI compliant and thus need translation when running on a regular HTTP server. For example, the URL of an HLS segment looks like this:

    http://example.net/content.ism/content-audio_eng=-video_eng=-.ts

This URL can point to a file on the file system of a regular HTTP server. The URL of an HSS segment, by contrast, looks like this:

http://example.net/content.ism/QualityLevels()/Fragments(video=)

The latter URL has to be translated into a byte range within a fragmented MP4 file, which is a feature not supported on regular HTTP servers. To support this translation, Microsoft has outfitted its own HTTP web server implementation, IIS, with the Smooth Streaming media extension. Adobe has implemented a similar side mechanism, the HTTP Origin Module for Apache, which likewise translates requests to file byte ranges.

3 http://dashif.org/members/
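
To make the translation step concrete, the sketch below shows the kind of mapping such a server module performs. The URL pattern follows the HSS example above, but the segment index, file name and byte offsets are entirely hypothetical and do not describe the actual IIS or Apache modules:

## hss_translation_sketch.py -- hypothetical index and offsets, illustrative only ##
import re

# A pre-computed index mapping (quality, timestamp) to byte ranges in a
# fragmented MP4 file; in a real origin this would be derived from the manifest.
SEGMENT_INDEX = {
    (1500000, 0): ("content_1500k.ismv", 0, 524287),
    (1500000, 20000000): ("content_1500k.ismv", 524288, 1048575),
}

HSS_URL = re.compile(r"QualityLevels\((\d+)\)/Fragments\(video=(\d+)\)")

def translate(url):
    """Map an HSS segment URL to (file, first_byte, last_byte)."""
    match = HSS_URL.search(url)
    if not match:
        raise ValueError("not an HSS fragment URL")
    quality, timestamp = int(match.group(1)), int(match.group(2))
    return SEGMENT_INDEX[(quality, timestamp)]

print(translate("http://example.net/content.ism/"
                "QualityLevels(1500000)/Fragments(video=20000000)"))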

Apple documented HLS as an Internet Draft at the IETF and updates the draft regularly; a push from Apple to have the IETF standardize HLS as an RFC has been absent, though.

Digital Rights Management (DRM), a way of encrypting encoded video content, is best supported on DASH. The Common Encryption Scheme (CENC) implemented in DASH specifies standard encryption that is decryptable by one or more digital rights management systems. This allows DASH, for example, to serve both Android and iOS players from a single content stream by encrypting for Microsoft's PlayReady and Verimatrix encryption, which makes supporting multiple video players easier.

DASH might seem like the obvious way to go, but DASH players can be troublesome, most notably on the desktop. DASH has proprietary players, some of which already had a client base:

• Theoplayer, an HTML5-based video streaming player that, since the 25th of August 2015, also supports DASH in addition to HLS4.

• JWplayer, a multi-platform HTML5 and Flash based video streaming player that, with the release of version 7.0.0 on 20 July 2015, supports the DASH standard5.

• BitDASH, a dedicated HTML5-based DASH video streaming player, which on first release supported only DASH and later added HLS support6.

4 https://www.theoplayer.com/
5 http://www.jwplayer.com/
6 http://www.dash-player.com/

But there are also open-source alternatives:

• DASH.js, a reference client implementation for the playback of MPEG-DASH content using client-side JavaScript libraries. The implementation is the result of an initiative by the DASH Industry Forum, a group with the purpose of growing DASH's market share[13]. The reference client is still in active development, and a large refactor as of October 30th 2015 extended its scalability and modularity. The DASH.js project has multiple forks in use, for example:

– castLabs, who have an open-source implementation called DASH.as with limited functionality in relation to their professional line of video client implementations, DASH Everywhere7.

– Google, with the Shaka player, a player optimized for low-bandwidth video streaming8.

– Vualto, who implemented the DASH.js reference player into their workflow around August 20149.

• DASHplay (by RTL NL), RTL, Triple IT and Unified Streaming's open-source attempt at building the best MPEG-DASH player[14].

All of the client implementations lack features that would make them the best choice. Some do not operate well with live video, others do not fully implement subtitles; none of these clients fully implements every necessary service.

DASH players for devices, built from SDKs like Inside Secure10, VisualON11 or Nexstreaming12, behave better, but these tend to be expensive. With the refactoring of DASH.js to version 2.0, a more complete open-source DASH player might become available.

7 https://github.com/castlabs/dashas
8 https://github.com/google/shaka-player
9 http://www.vualto.com/i-dont-even-like-mustard/
10 http://www.insidesecure.com/Products-Technologies/Embedded-Software-Solutions/DRM-Fusion-Agent-for-Downloadable-Deployment
11 http://visualon.com/onstream-mediaplayer/
12 http://www.nexstreaming.com/products


  • CHAPTER 3

    Infrastructure

    3.1 Requirements

A good video streaming experience for users relies on a few important conditions:

• Avoid buffer underruns, because when the buffer is empty the video playback will be interrupted[4].

• Refrain from oscillating in video quality. This negatively affects the perceived quality[15][16].

• Make use of all the available bandwidth to offer the highest possible video bit-rate, so the user gets the best possible video quality[4].

When we take these user-focused conditions into consideration in relation to what a server has to provide, the following server conditions have to be met:

• A server needs to offer continuity of bandwidth.

• A server has to provide stable bandwidth.

• A server's throughput has to approximate the total user throughput. This allows each client to have their throughput approach their bandwidth, ensuring the best possible video streaming quality.


The amount of bandwidth a user needs depends on the encoding of the media, the bit-rate, and the container in which the media is transported. Netflix, which implements the MP4 profile of the DASH protocol, recommends the following bandwidths to its users for the different qualities of streaming1:

• Netflix states 0.5 Mb/s as the minimum speed to be able to stream any video at all.

• 1.5 Mb/s is the recommended speed to stream a minimum quality video.

• For SD (Standard Definition), 3.0 Mb/s is the required bandwidth. SD means the video has a resolution of at least 480p.

• For HD (High Definition), 5.0 Mb/s is the required bandwidth. HD refers to the HD-ready specification of at least 720p.

• Netflix started supporting UHD in 2014, with a minimum bandwidth of 25 Mb/s. UHD has a resolution of 2160p.

To provide every user with a good video streaming experience, Netflix has 50+ encodings per video, one for each specific platform and bandwidth.
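
These per-user figures translate directly into a rough upper bound on how many clients a single link can serve. The sketch below does the arithmetic for a 1 Gbit/s interface (the link speed of the test servers in Chapter 5), treating the Netflix recommendations as nominal per-stream rates; real servers hit CPU, memory or disk limits earlier:

## capacity_sketch.py -- back-of-the-envelope calculation, illustrative only ##

LINK_CAPACITY_BPS = 1_000_000_000  # 1 Gbit/s network interface

# Netflix-recommended per-stream bandwidths (bit/s) for different qualities.
PER_STREAM_BPS = {
    "minimum": 1_500_000,
    "SD (480p)": 3_000_000,
    "HD (720p)": 5_000_000,
    "UHD (2160p)": 25_000_000,
}

for quality, rate in PER_STREAM_BPS.items():
    # Network-level upper bound on simultaneous streams of this quality.
    max_streams = LINK_CAPACITY_BPS // rate
    print(f"{quality}: at most {max_streams} concurrent streams")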

    3.2 Set-ups

The server-side functionality of Adaptive Bit-rate Streaming is provided by an origin. An origin either creates the content just-in-time or offloads pre-packaged content. The simplest set-up consists of a single origin server serving clients.

Figure 3.1: A diagram showing a simple origin set-up: clients send HTTP GET requests to the origin, which responds with ABS data.

The set-up described in Figure 3.1 could never serve a large number of clients, because the single server would easily be overwhelmed. This is why a set-up usually consists of multiple origins over which clients are distributed by a load-balancer. Next to increasing the number of origins, caches can be used to decrease traffic to the origins. Alternatively, a Content Delivery Network (CDN) may be used to scale; CDNs can be put in front of an origin to decrease traffic to the origin.

1 https://help.netflix.com/en/node/306

    Figure 3.2: Diagram illustrating a set-up using two CDNs to serve clients.

CDNs can be as small as a single edge node or as large as the CDNs of companies like Akamai. Akamai's CDNs provide 15 to 30% of the world's Internet content[17].

An example of how video content is hosted is the way Netflix uses CDNs from several companies to distribute its video content. When new content has to be distributed, a single upload into a single CDN is enough to distribute the content over the global Netflix CDN infrastructure. Except for the video content, almost everything is hosted on the Amazon cloud: content ingestion, log recording and analysis, DRM, CDN routing, user sign-in, and mobile device support are all done there. The Amazon cloud is responsible for generating a list of CDNs for a client's location. A manifest containing the best CDNs and the available video/audio bit-rates for the video is generated. The player, which is either provided by the Amazon cloud or already installed on the client's device, then plays the content using the adaptive bit-rate protocol implementation[18].

A CDN typically has a load balancer and a number of edges, which are servers over which the requests for content are divided. A request is redirected to an edge based on the distance from the edge to the client in hops or in time, or on the historic and current performance of the edge. An optional web cache is often implemented to decrease loading times for recently and most frequently requested content[18].

    3.3 Performance measurements

Because a CDN as a collective is very hard to fully stress, a meaningful and easier way to test performance could be to stress and measure the capabilities of individual edges, and from that estimate the capabilities of the CDN. To make sure the test performance is purely the result of the edge's effort, it is important to have no web caching active. Web caching is a popular mechanism in CDNs to cost-effectively handle a large amount of load on a small amount of content.

Network performance tests consist of a number of possible measurements, from which a test can pick the ones relevant to the question at hand. Popular options for these tests are:

• Bandwidth; the maximum rate at which data can be transferred.

• Throughput; the actual rate at which data is transferred.

• Latency; the delay between the sender sending a packet and the receiver decoding it. Latency is mainly the result of a signal's travel time combined with processing times at the nodes.

• Jitter; the variation in latency at the receiving end of the data.

• Error rate; the number of corrupted bits expressed as a percentage or fraction of the total sent.

Jitter can be used as an indicator of server stress, in the sense that a high amount of jitter indicates that a server is not handling requests consistently. In ABS, though, the buffer of the ABS video player stops jitter from being a problem.
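
As a small illustration of how these measurements relate, the sketch below computes throughput, mean latency and jitter (taken here as the mean absolute deviation of latency, one of several common definitions) from a set of hypothetical per-request samples:

## metrics_sketch.py -- hypothetical samples, illustrative only ##
from statistics import mean

# Per-request samples: (bytes transferred, latency in milliseconds).
samples = [(1_048_576, 12.1), (1_048_576, 11.8), (1_048_576, 19.4),
           (1_048_576, 12.3), (1_048_576, 11.9)]

duration_s = 0.5  # wall-clock time over which the samples were taken

throughput_mbps = sum(size for size, _ in samples) * 8 / duration_s / 1e6
latencies = [latency for _, latency in samples]
mean_latency = mean(latencies)
jitter = mean(abs(l - mean_latency) for l in latencies)  # mean absolute deviation

print(f"throughput: {throughput_mbps:.1f} Mbit/s")
print(f"mean latency: {mean_latency:.1f} ms, jitter: {jitter:.1f} ms")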

While many papers research single-client implementations, little is known about the number of clients that can be serviced from one server. Single-client performance research consists of measurements of bandwidth in relation to the throughput that is utilized by the client[4][3]. Server performance tests should therefore at least include the throughput to the server as one of the measurements.

With the introduction of the HTTP/1.1 protocol, the performance of the updated protocol was measured with HTML page loads versus HTTP/1.0 page loads[19]. The data measured for this test was transmitted bytes, sent packets and relative overhead, put against the running time of the page load. The updated HTTP/2.0, whose specification was published in RFC 7540[20], is likely to have similar papers published as the new protocol is adopted.


  • CHAPTER 4

    Tensor

The theory behind Adaptive Bit-rate Streaming and network performance measurements was used to build a tool that helps system engineers determine the capabilities of a streaming set-up. The name of the tool is Tensor, an ABS-specific load testing tool.

    4.1 Requirements

Tensor requires a number of things to be a satisfactory ABS-specific load testing tool, namely:

• An ABS-specific load generating component.

• An interface which collects an origin's performance metrics.

• A user interface which collects and displays measurements.

The Tensor project is open-source, so any tool or framework that is used in building the tool has to be open-source as well.


    4.2 Design

While looking for a load generator, the decision was made to look for one with as little overhead as possible, to ensure the load generator itself would never be the bottleneck when generating load against a server.

             Light  GUI  Multiple URLs
ApacheBench  Yes    No   No
HTTPerf      Yes    No   No
Jmeter       No     Yes  Yes
WRK          Yes    No   Yes

Table 4.1: List of open-source HTTP load generators, with relevant features.

ApacheBench, HTTPerf and WRK are programs that can exclusively be run from the command line; all of them are written in C. Jmeter, however, is built using Java and has a GUI. While ApacheBench, HTTPerf and WRK focus on generating as many requests as possible, Jmeter tries to more closely simulate actual user interactions. Because of WRK's Lua API, WRK also allows for a degree of virtual user simulation. Jmeter was ultimately discarded because of its overhead relative to the C-based load generators. Because WRK is the only C-based load generator that supports multiple URLs, WRK was chosen as the load generating part of Tensor. WRK is able to generate high load and many requests from a single source, which is useful because Tensor is hosted on a single server. WRK is designed as a modern HTTP benchmarking tool, combining a multi-threaded design with scalable event notification systems like epoll[21] and kqueue[22].

To make WRK accessible to the frontend, WRK was attached to the Python Flask web framework as an API. Flask was chosen for its ability to easily and quickly deploy an API. The open-source Flask micro-framework is written in Python and is based on Werkzeug, Jinja2 and what its authors call good intentions[23]. The code is published under the BSD license. Apart from the Python Standard Library, the framework has no further dependencies. Flask has the "micro framework" classification because it does not implement the full data-storage-to-frontend pipeline and aims to keep the core small and extensible. Services offered by the framework include a built-in HTTP server, support for unit testing, and RESTful web services[24]. Features that are missing can be added using extensions. For example, the Flask-SQLAlchemy module extends the Flask framework with the Python SQLAlchemy toolkit. Flask implements only a small number of extra modules and gives the opportunity to remove the modules that are implemented; the Jinja2 templating engine, for example, can be switched out for an engine more to an engineer's preference.

There is a large number of network monitoring systems that fulfil the requirements of being open-source and reachable externally. One of them, however, is used by the open-source Netflix Vector framework, which was released in August 2015. Vector collects measurements from the monitoring system Performance Co-Pilot (PCP) in a Javascript frontend. The framework was built to be easily extended to incorporate measurements not only from PCP but also from other sources. Because of its extensibility and its integration with a monitoring system, Netflix Vector was chosen as the frontend framework of Tensor, and consequently PCP was chosen as the monitoring system.

Angular, the framework Vector is built on, is an open-source Javascript framework and part of the MEAN full-stack framework as its frontend module1. Angular's aim is to support the development of single-page web applications, providing a simplified design and testing platform.

    4.3 Implementation

In Tensor, the Flask framework is used mainly to serve as an API for the frontend, but it also acts as the access point for a client to retrieve the frontend. The tool displays measurements of CPU, HDD and memory usage queried from an origin running the Performance Co-Pilot framework. WRK is used to acquire measurements of throughput, latency and segment requests to an origin.

1 https://github.com/linnovate/mean


[Figure 4.1 diagram labels: the Tensor server runs Flask with the WRK and ping load components and the REST API, and serves the Javascript Angular web app (dashboard, charts, data models, metrics, services, view); the video hosting server runs Apache on port 80 (the HTTP server/origin) and the PCP API on port 44323.]

Figure 4.1: Diagram showing the design of Tensor and how it interfaces with a video hosting server.


    4.3.1 Backend

    View

The view is the simplest part of the whole project. No server-side templating is required because all templating logic is done in the Angular frontend. Once a blueprint has been defined, providing an access point in Flask takes three lines. The root of the web application is defined as the function home(), which returns index.html, a static HTML page that is generated during the web application's start-up. The complete view code can be seen in main.py below.

## main.py ##
from flask import render_template, Blueprint

# Blueprint serving the root of the web application.
main_blueprint = Blueprint('main', __name__, url_prefix='')


@main_blueprint.route('/', methods=['GET'])
def home():
    # index.html is generated at start-up and contains the Angular frontend.
    return render_template('index.html', data={})

    REST API

The API provides the frontend with metrics of the load put on the media hosting server by Tensor, as shown by the edges on the right of Figure 4.1. The tool used as the load generating part of the backend is WRK[25].

To run, WRK requires a URL, a duration and the number of simultaneous connections. WRK will then send requests to the URL for the specified duration with the stated number of connections. Afterwards it outputs statistics about the network performance. WRK has a Lua scripting interface, LuaJIT, which allows additional logic for sending and receiving requests. The Lua scripting interface is used by the Tensor server to send requests for multiple segments of the video(s) stored on the video hosting server. Requests are constructed from the contents of a file that contains URLs for the different segments.
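
A minimal sketch of how such an endpoint can shell out to WRK is shown below. The endpoint name, parameters and the decision to return WRK's raw report are assumptions made for illustration and do not reproduce Tensor's actual API; only the -t/-c/-d options are WRK's documented flags:

## wrk_api_sketch.py -- illustrative only, not Tensor's actual API ##
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route('/api/loadtest', methods=['GET'])
def loadtest():
    # Hypothetical default target; a real deployment would validate the URL.
    url = request.args.get('url', 'http://origin.example.net/50MB.zip')
    duration = request.args.get('duration', '5s')
    connections = request.args.get('connections', '1')
    # wrk -t<threads> -c<connections> -d<duration> <url>
    result = subprocess.run(
        ['wrk', '-t1', '-c' + connections, '-d' + duration, url],
        capture_output=True, text=True, check=True)
    # Return WRK's textual report; a real backend would parse out
    # throughput, latency and request counts before returning them.
    return jsonify({'raw_output': result.stdout})


if __name__ == '__main__':
    app.run(port=5000)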

Initial baseline data is gathered using two tools: ping and WRK. Ping is run for 5 seconds after the API call and the average round-trip time is returned to the Angular app. Because of TCP 'slow start', WRK cannot be run against small files: an HTML page, for example, would produce many requests, resulting in too much overhead to reach a high throughput. The solution is to load test against a 50 MB zip file on the HTTP web server (e.g. Apache). This throughput data serves as a baseline for the later load testing against the ABS-implementing module on the server.

    4.3.2 Performance Co-Pilot

PCP runs on the origin and has an API listening on port 44323, shown in the video hosting server part of Figure 4.1.

Figure 4.2: Diagram illustrating the way Performance Co-Pilot distributes host measurements to the Vector framework3.

PCP's Performance Metrics Co-ordinating Daemons (PMCDs) sit between monitoring clients and Performance Metric Domain Agents (PMDAs), as seen in Figure 4.2. The PMDAs are interfaces with functions that collect measurements from the different sources, e.g. an application, mailq, a database or the kernel. Tensor connects to an integrated web API that communicates with a web daemon monitoring client. The web daemon client in turn is connected to a host's PMCD. Clients first request a context using a URI combined with /pmapi/context to access the bindings. The context is maintained as long as the clients regularly reconnect to request PMAPI operations.

3 http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
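
The polling loop against the PCP web API can be sketched as follows. The /pmapi/context and fetch endpoints are modelled on the PCP web daemon that Vector talks to; the exact parameters, the metric name and the host below should be treated as assumptions rather than a verified client:

## pcp_poll_sketch.py -- assumed pmwebd endpoints, illustrative only ##
import time
import requests

PCP_BASE = "http://origin.example.net:44323"  # hypothetical origin host


def get_context():
    """Request a PCP context id from the web daemon."""
    resp = requests.get(f"{PCP_BASE}/pmapi/context",
                        params={"hostname": "localhost", "polltimeout": 60})
    resp.raise_for_status()
    return resp.json()["context"]


def fetch_metric(context, name="kernel.all.load"):
    """Fetch one metric through the previously obtained context."""
    resp = requests.get(f"{PCP_BASE}/pmapi/{context}/_fetch",
                        params={"names": name})
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    ctx = get_context()
    for _ in range(3):              # Tensor polls every 5 seconds
        print(fetch_metric(ctx))
        time.sleep(5)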


    4.3.3 Frontend

Frontend development was based on the open-source Netflix Vector framework (HTML/CSS/JS)[26], which makes use of the open-source Angular Javascript framework. The Netflix Vector framework was extended to shift its focus towards load metrics and away from system metrics.

In Tensor the frontend is used to display the metrics retrieved from the PCP API on the origin or from the Tensor API. The frontend extends the Angular app with both third-party libraries and the project's source code.

The files in the dashboard folder render the structure of the HTML page and load the configured charts into the dashboard layout. The dashboard controller starts a number of services as soon as a URL is entered into the hostname input. The dashboard service retrieves the PCP context through the pmapi service. If the context is valid, the dashboard service starts an interval that retrieves measurements from the host every 5 seconds. Measurements are retrieved through the pmapi service and the wrkapi service, which update the metrics of PCP and WRK. The measurements traverse one of the abstraction layers in the metrics folder before they are passed on to a datamodel. The abstraction layer converts raw metrics from PCP into the metrics shown in the charts. All data shown in the charts is directly linked to its respective datamodel. The way charts are loaded into the dashboard is very modular and allows different metrics to use the same chart template.


Figure 4.3: Screenshot showing Tensor's frontend while running a load test, with the sources of the data overlaid.

When Tensor is started as a web server, the Javascript source code is compiled into one file using Gulp. Gulp is a toolkit that handles automation in development environments and automatically loads third-party libraries into Tensor's Javascript library. Figure 4.3 shows the Tensor frontend running in a Firefox browser.


  • CHAPTER 5

    Experiments

As part of Tensor's testing, the load testing tool was run against two origin servers. Each server had all of its supported implementations tested with a single run of Tensor.

A run in the experiment consists of five seconds of pinging the server, followed by five seconds of load testing against a fifty-MB zip file. ABS-specific load is then applied at the click of a button for a range of one to two hundred concurrent connections. A run is cancelled if the server crashes as a result of the load.
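
The structure of such a run can be sketched as the driver loop below; the run_wrk helper, the manifest URL, the baseline file name and the step timing are placeholders standing in for Tensor's actual WRK invocation described in Section 4.3.1:

## run_sketch.py -- hypothetical driver loop, illustrative only ##
import subprocess

ORIGIN = "http://origin.example.net"     # hypothetical server under test
BASELINE_FILE = ORIGIN + "/50MB.zip"     # baseline target, name assumed


def run_wrk(url, connections, duration="5s"):
    """Run WRK once and return its textual report."""
    result = subprocess.run(
        ["wrk", "-t1", "-c" + str(connections), "-d" + duration, url],
        capture_output=True, text=True, check=True)
    return result.stdout


def experiment_run():
    # 1. Five seconds of pinging for the baseline latency.
    subprocess.run(["ping", "-c", "5", "origin.example.net"], check=True)
    # 2. Five seconds against a large static file for the baseline throughput.
    print(run_wrk(BASELINE_FILE, connections=1))
    # 3. ABS-specific load, ramping from 1 to 200 concurrent connections,
    #    one extra connection every 5-second WRK run.
    for connections in range(1, 201):
        report = run_wrk(ORIGIN + "/tears_of_steel.ism/manifest",
                         connections=connections)
        # A real run would parse and store throughput/latency here and
        # abort if the origin stops responding.
        print(report.splitlines()[-1])


if __name__ == "__main__":
    experiment_run()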

    5.1 Testing setup

    5.1.1 Source

The source is the server on which Tensor is hosted and from which the load is generated. Initially two source servers were prepared for the experiments, but one of them was virtualized on the same machine as demo.unified-streaming.com and was therefore discarded. A high-bandwidth Amazon EC2 instance remained as the sole source of load testing for the experiments.


    ec2-XX-XX-XXX-XXX.eu-central-1.compute.amazonaws.com

First a default Ubuntu AMI (Amazon Machine Image) was initialized on an entry-level EC2 instance. On this EC2 instance the software required for hosting Tensor was installed. A snapshot was then made of the EC2 instance state to create a custom AMI containing the Tensor tool. This AMI was transferred from AMI storage to an EC2 instance with the required network interface speed. The address shown in the section title is the URL, in which the X's identify the virtual machine.

Source                   compute.amazonaws.com
IP                       52.29.156.202
Location                 Frankfurt, Germany
OS                       Ubuntu 14.04.2 x64
Kernel                   3.13.0-48 (12-03-2015)
CPU                      36 vCPUs @ 2.9 GHz (Intel Xeon E5-2666 v3)
RAM                      60 GB
Network Interface Speed  10 Gbit/s

    5.1.2 Video Hosting Servers

    usp.abewiersma.nl

usp.abewiersma.nl is hosted on an entry-level virtual machine provided by DigitalOcean at their Amsterdam 2 server location. The DigitalOcean hosting service is maintained and provided by the Telecity group.

Origin                   usp.abewiersma.nl
IP                       188.226.172.129
Location                 Amsterdam, Netherlands
OS                       Ubuntu 12.04.3 x64
Kernel                   3.8.0-29 (14-08-2013)
CPU                      1 vCore @ 2.4 GHz (Intel Xeon E5-2630L v2)
RAM                      512 MB
Storage                  20 GB SSD
Network Interface Speed  1 Gbit/s
Server Software          Apache/2.4.7
ABS Implementation       Unified Streaming Package 1.7.16


    demo.unified-streaming.com

demo.unified-streaming.com is hosted on a virtual machine on a privately owned dedicated computer. The Unified Streaming team provided the specifications of the server hardware and a document with the segment URLs.

Origin                   demo.unified-streaming.com
IP                       46.23.86.207
Location                 Kockengen, Netherlands
CPU                      4 vCores @ 2.3 GHz
RAM                      4 GB
Network Interface Speed  1 Gbit/s
Server Software          Apache/2.2.22
ABS Implementation       Unified Streaming Package 1.7.17

    5.1.3 Video & Software

    Test Video

The test video from which fragments are requested is the open-source animated movie Tears of Steel. The movie was made to showcase the new enhancements and visual capabilities of Blender.

The Unified Streaming Package

The Unified Streaming Package supports the following ABS implementations from one source:

    • HDS

    • HLS

    • HSS (also referred to as MSS)

    • DASH

The Unified Origin package is developed by Unified Streaming and is under active development.


    5.2 Results

Tables 5.1 through 5.8 show single-run-per-implementation results from the Tensor load testing tool. In the runs, the number of concurrent connections increases by one every 5 seconds.

    5.2.1 Server: usp.abewiersma.nl

Table 5.1: Summary of HDS load testing on usp.abewiersma.nl. For images of the HDS load test running, see Appendix A.1.

HDS                 Init         20:30:35 at 25 connections  Failure
Ping                6.90 ms      -                           -
Connections         1            25                          80-90
Throughput          47.69 MB/s   110 MB/s                    0 MB/s
Segments/s          -            155                         0
CPU Utilization     0%           50%                         100%
Disk Utilization    0%           2-5%                        35-40%
Memory Utilization  140 MB       240 MB                      512 MB

Table 5.2: Summary of HLS load testing on usp.abewiersma.nl. For images of the HLS load test running, see Appendix A.2.

HLS                 Init         20:47:05 at 25 connections  Failure
Ping                6.86 ms      -                           -
Connections         1            25                          70
Throughput          50.45 MB/s   95 MB/s                     0 MB/s
Segments/s          -            300                         0
CPU Utilization     0%           60%                         100%
Disk Utilization    0%           0%                          30%
Memory Utilization  120 MB       300 MB                      512 MB


Table 5.3: Summary of HSS load testing on usp.abewiersma.nl. For images of the HSS load test running, see Appendix A.3.

HSS                 Init         22:09:40 at 25 connections  Failure
Ping                6.86 ms      -                           -
Connections         1            25                          80
Throughput          51.95 MB/s   110 MB/s                    0 MB/s
Segments/s          -            310                         0
CPU Utilization     0%           50%                         100%
Disk Utilization    0%           2-5%                        10%
Memory Utilization  190 MB       250 MB                      512 MB

Table 5.4: Summary of DASH load testing on usp.abewiersma.nl. For images of the DASH load test running, see Appendix A.4.

DASH                Init         22:38:35 at 25 connections  Failure
Ping                7.47 ms      -                           -
Connections         1            25                          90
Throughput          50.64 MB/s   110 MB/s                    0 MB/s
Segments/s          -            300                         0
CPU Utilization     0%           50%                         100%
Disk Utilization    0%           15%                         35-40%
Memory Utilization  130 MB       240 MB                      512 MB

    Discussion

Load testing showed unanimous failure at the point where memory is completely filled, after which CPU usage goes to 100%. HLS failed with both lower throughput and fewer connections than the other implementations. Initial throughput measurements consistently came out lower than the load testing throughput measurements. HLS does not reach a throughput of 110 MB/s where the other implementations do; 110 MB/s is close to the network interface speed of the server, 1 Gbit/s = 125 MB/s.


    5.2.2 Server: demo.unified-streaming.com

Table 5.5: Summary of HDS load testing on demo.unified-streaming.com. For images of the HDS load test running, see Appendix B.1.

HDS                 Init         22:37:00 at 70 connections  End
Ping                7.63 ms      -                           -
Connections         1            70                          200
Throughput          11.58 MB/s   90 MB/s                     90 MB/s
Segments/s          -            54                          45
CPU Utilization     5%           25%                         25%
Disk Utilization    0%           15%                         20%
Memory Utilization  1000 MB      1250 MB                     1500 MB

A pronounced 30 errors (timeouts) per second was recorded nearing the end of the HDS test.

Table 5.6: Summary of HLS load testing on demo.unified-streaming.com. For images of the HLS load test running, see Appendix B.2.

HLS                 Init         23:20:02 at 70 connections  End
Ping                7.34 ms      -                           -
Connections         1            70                          200
Throughput          8.90 MB/s    85 MB/s                     70 MB/s
Segments/s          -            325                         280
CPU Utilization     4%           45%                         35%
Disk Utilization    0%           10%                         10%
Memory Utilization  1050 MB      1250 MB                     1500 MB


Table 5.7: Summary of HSS load testing on demo.unified-streaming.com. For images of the HSS load test running, see Appendix B.3.

HSS                 Init         00:31:19 at 70 connections  End
Ping                7.71 ms      -                           -
Connections         1            70                          200
Throughput          26.57 MB/s   90 MB/s                     75 MB/s
Segments/s          -            220                         180
CPU Utilization     5%           35%                         30%
Disk Utilization    0%           10%                         15%
Memory Utilization  1000 MB      1250 MB                     1400 MB

Table 5.8: Summary of DASH load testing on demo.unified-streaming.com. For images of the DASH load test running, see Appendix B.4.

DASH                Init         00:51:33 at 70 connections  End
Ping                7.60 ms      -                           -
Connections         1            70                          200
Throughput          20.02 MB/s   85 MB/s                     75 MB/s
Segments/s          -            205                         180
CPU Utilization     4%           30%                         30%
Disk Utilization    5%           15%                         15%
Memory Utilization  1000 MB      1100 MB                     1400 MB

    Discussion

The initial throughput measurements seem random every time the tool is run. These measurements should be consistent, because each run measures against the same file on the same server. The Tensor web application only goes up to 200 concurrent connections, which is why every load test ends at 200 connections. During every implementation load test, the throughput goes down as the number of concurrent connections goes past the steady state.


  • CHAPTER 6

    Conclusion

In this thesis the goal was to find a meaningful way to do Adaptive Bit-rate Streaming load testing. The tool that was made for this purpose, Tensor, performs well at load testing. However, for now the baseline testing of the throughput to the server is not representative and thus non-functional.

The results show that the demo server, provided by the Unified Streaming team, does not reach its maximum network interface speed of 1 Gbit/s. On the other hand, the demo server should be able to support more concurrent users than the DigitalOcean cloud server due to its higher memory capacity. With about 200 concurrent users, the average throughput per user over a less-than-1-Gbit/s link is lower than the 5.0 Mbit/s Netflix recommends for HD streaming. A load higher than 200 concurrent HD streaming clients might therefore stop the clients from receiving their optimal video bit-rate quality.

The DigitalOcean cloud server is limited by the relatively small amount of memory it has available and completely stalls as the number of concurrent connections reaches about 90. For every connection Apache spawns a process to maintain that connection, and every process uses a small amount of memory. This causes the memory to fill up to the point that swapping starts and connection attempts get dropped.
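
A rough sanity check of this explanation, using an assumed (not measured) per-process footprint, relates the 512 MB of RAM to the observed stall at around 90 connections:

## apache_memory_sketch.py -- assumed per-process footprint, illustrative only ##

total_ram_mb = 512     # RAM of the DigitalOcean server
baseline_mb = 140      # approximate idle memory usage observed in Table 5.1
per_process_mb = 4     # assumed footprint of one Apache worker process

# Upper bound on connections before memory is exhausted and swapping begins.
max_connections = (total_ram_mb - baseline_mb) // per_process_mb
print(f"roughly {max_connections} connections before memory runs out")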


The Adaptive Bit-rate Streaming protocol implementations perform fairly similarly, with a few exceptions:

• The Apple HLS protocol performs worst at achieving the allotted 1 Gbit/s throughput.

• The Adobe HDS protocol uses the fewest segments for streaming, and therefore suffers from a large number of requests that time out.

    6.1 Future Work

For load testing to have a reference, it is important to finish and tweak the baseline throughput testing. Without the server specifications that were known beforehand, estimating the bandwidth to the servers would have been impossible. Currently, as a way to estimate the bandwidth, a single WRK connection requests a 50 MB file. When testing this set-up, Tensor was run from a laptop development environment, and in that instance WRK gave results similar to IPERF. Testing of the set-up should have been more thorough to ensure its accuracy. The tool also still suffers from infancy bugs such as:

• A bug that makes the tooltips of the last nodes in a graph unreadable, because the next graph clips over the tooltip.

• Drag and drop of the graph widgets is broken.


  • Bibliography

[1] Cisco. Cisco Visual Networking Index: Forecast and Methodology, 2014–2019. White paper, 2015.

[2] J. Postel. User Datagram Protocol. RFC 768, August 1980.

[3] Saamer Akhshabi, Ali C. Begen, and Constantine Dovrolis. An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP. In Proceedings of the second annual ACM conference on Multimedia systems, pages 157–168. ACM, 2011.

[4] Haakon Riiser, Håkon S. Bergsaker, Paul Vigmostad, Pål Halvorsen, and Carsten Griwodz. A comparison of quality scheduling in commercial adaptive HTTP streaming solutions on a 3G network. In Proceedings of the 4th Workshop on Mobile Video, pages 25–30. ACM, 2012.

[5] P. Pegus, Emmanuel Cecchet, and Prashant Shenoy. Video BenchLab: an open platform for realistic benchmarking of streaming media workloads. In Proc. ACM Multimedia Systems Conference (MMSys), Portland, OR, 2015.

[6] Stefan Lederer, Christopher Müller, and Christian Timmerer. Dynamic adaptive streaming over HTTP dataset. In Proceedings of the 3rd Multimedia Systems Conference, pages 89–94. ACM, 2012.


[7] Apple Inc. HTTP Live Streaming. IETF draft, November 2015. https://tools.ietf.org/pdf/draft-pantos-http-live-streaming-18.pdf.

[8] Haakon Riiser, Pål Halvorsen, Carsten Griwodz, and Dag Johansen. Low overhead container format for adaptive streaming. In Proceedings of the first annual ACM SIGMM conference on Multimedia systems, pages 193–198. ACM, 2010.

[9] Microsoft. Smooth Streaming Protocol. [MS-SSTR] - v20150630, June 2015. http://download.microsoft.com/download/9/5/E/95EF66AF-9026-4BB0-A41D-A4F81802D92C/[MS-SSTR].pdf.

[10] Adobe Systems Incorporated. HTTP Dynamic Streaming specification. Version 3.0 FINAL, August 2013. http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-hds-specification.pdf.

[11] ISO/IEC. Information technology – Dynamic adaptive streaming over HTTP (DASH), Part 1. Technical Report ISO/IEC 23009-1:2014, International Organization for Standardization, Geneva, Switzerland, 2014.

[12] Dan Grois, Detlev Marpe, Amit Mulayoff, Benaya Itzhaky, and Ofer Hadar. Performance comparison of H.265/MPEG-HEVC, VP9, and H.264/MPEG-AVC encoders. In Picture Coding Symposium (PCS), 2013, pages 394–397. IEEE, 2013.

[13] DASH-IF. DASH.js. https://github.com/Dash-Industry-Forum/dash.js/wiki.

    [14] RTL NL. DASHplay. http://dashplay.org/.

[15] Pengpeng Ni, Alexander Eichhorn, Carsten Griwodz, and Pål Halvorsen. Fine-grained scalable streaming from coarse-grained videos. In Proceedings of the 18th international workshop on Network and operating systems support for digital audio and video, pages 103–108. ACM, 2009.

[16] Michael Zink, Oliver Künzel, Jens Schmitt, and Ralf Steinmetz. Subjective impression of variations in layer encoded videos. In Quality of Service - IWQoS 2003, pages 137–154. Springer, 2003.


[17] Anya George Tharakan and Subrat Patnaik. Strong dollar hurts Akamai's profit forecast, shares fall. Reuters, April 2015.

[18] Vijay Kumar Adhikari, Yang Guo, Fang Hao, Matteo Varvello, Volker Hilt, Moritz Steiner, and Zhi-Li Zhang. Unreeling Netflix: Understanding and improving multi-CDN movie delivery. In INFOCOM, 2012 Proceedings IEEE, pages 1620–1628. IEEE, 2012.

[19] Henrik Frystyk Nielsen, James Gettys, Anselm Baird-Smith, Eric Prud'hommeaux, Håkon Wium Lie, and Chris Lilley. Network performance effects of HTTP/1.1, CSS1, and PNG. In ACM SIGCOMM Computer Communication Review, volume 27, pages 155–166. ACM, 1997.

[20] M. Thomson, Ed. Hypertext Transfer Protocol Version 2 (HTTP/2). RFC 7540, Internet Engineering Task Force (IETF), May 2015.

    [21] epoll(7) Linux User’s Manual, May 2015.

[22] Jonathan Lemon. Kqueue: A Generic and Scalable Event Notification Facility. In USENIX Annual Technical Conference, FREENIX Track, pages 141–153, 2001.

[23] Armin Ronacher. Flask. https://github.com/mitsuhiko/flask, http://flask.pocoo.org/docs/, 2015.

[24] Miguel Grinberg. Flask Web Development: Developing Web Applications with Python. O'Reilly Media, Inc., 2014.

[25] Will Glozer. WRK - Modern HTTP benchmarking tool. https://github.com/wg/wrk, 2015.

    [26] Netflix Inc. Vector. https://github.com/Netflix/vector, 2015.

[27] Didier J. LeGall. MPEG (Moving Pictures Expert Group) video compression algorithm: a review. In Electronic Imaging '91, San Jose, CA, pages 444–457. International Society for Optics and Photonics, 1991.


  • Appendices


  • APPENDIX A

    Results usp.abewiersma.nl

A.1 HDS

Figure A.1: This figure shows the benchmarking of HDS on usp.abewiersma.nl reaching 25 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure A.2: This figure shows the moment during the benchmarking of HDS on usp.abewiersma.nl at which failure has occurred.

A.2 HLS

Figure A.3: This figure shows the benchmarking of HLS on usp.abewiersma.nl reaching 25 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure A.4: This figure shows the moment during the benchmarking of HLS on usp.abewiersma.nl at which failure has occurred.

A.3 HSS

Figure A.5: This figure shows the benchmarking of HSS on usp.abewiersma.nl reaching 25 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure A.6: This figure shows the moment during the benchmarking of HSS on usp.abewiersma.nl at which failure has occurred.

A.4 DASH

Figure A.7: This figure shows the benchmarking of DASH on usp.abewiersma.nl reaching 25 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure A.8: This figure shows the moment during the benchmarking of DASH on usp.abewiersma.nl at which failure has occurred.

  • APPENDIX B

Results demo.unified-streaming.com

B.1 HDS

Figure B.1: This figure shows the benchmarking of HDS on demo.unified-streaming.com reaching 70 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure B.2: This figure shows the end of the benchmarking of HDS on demo.unified-streaming.com, the point at which 200 concurrent connections have been run.

B.2 HLS

Figure B.3: This figure shows the benchmarking of HLS on demo.unified-streaming.com reaching 70 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure B.4: This figure shows the end of the benchmarking of HLS on demo.unified-streaming.com, the point at which 200 concurrent connections have been run.

B.3 HSS

Figure B.5: This figure shows the benchmarking of HSS on demo.unified-streaming.com reaching 70 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure B.6: This figure shows the end of the benchmarking of HSS on demo.unified-streaming.com, the point at which 200 concurrent connections have been run.

B.4 DASH

Figure B.7: This figure shows the benchmarking of DASH on demo.unified-streaming.com reaching 70 connections at the red marker, at which point the throughput to the server remains approximately the same whilst concurrent connections keep increasing.

Figure B.8: This figure shows the end of the benchmarking of DASH on demo.unified-streaming.com, the point at which 200 concurrent connections have been run. Note that the dip in throughput is not considered a failure because the server recovers.

  • APPENDIX C

    Glossary

MPEG stands for Moving Pictures Expert Group, a group that sets standards for audio/video compression and transmission[27].

IETF stands for Internet Engineering Task Force, a group with the goal of making the Internet work better. The group publishes documents called RFCs (Requests for Comments), which describe parts of the Internet and give opinions on what the best practices are.
