
Multi-User Multi-Task Computation Offloading in Green Mobile Edge Cloud Computing

Weiwei Chen, Dong Wang, and Keqin Li, Fellow, IEEE

Abstract—Mobile Edge Cloud Computing (MECC) has become an attractive solution for augmenting the computing and storage capacity of Mobile Devices (MDs) by exploiting the available resources at the network edge. In this work, we consider computation offloading to a mobile edge cloud that is composed of a set of Wireless Devices (WDs), where each WD is equipped with an energy harvesting device to collect renewable energy from the environment. Moreover, multiple MDs intend to offload their tasks to the mobile edge cloud simultaneously. We first formulate the multi-user multi-task computation offloading problem for green MECC, and use the Lyapunov optimization approach to determine the energy harvesting policy (how much energy to harvest at each WD) and the task offloading schedule: the set of computation offloading requests to be admitted into the mobile edge cloud, the set of WDs assigned to each admitted offloading request, and how much workload to process at the assigned WDs. We then prove that the task offloading scheduling problem is NP-hard, and introduce centralized and distributed Greedy Maximal Scheduling algorithms to resolve the problem efficiently. Performance bounds of the proposed schemes are also discussed. Extensive evaluations are conducted to test the performance of the proposed algorithms.

Index Terms—Mobile edge computing, computation offloading, energy harvesting, multi-user multi-task scheduling


1 INTRODUCTION

NOWADAYS, Mobile Devices (MDs) such as smartphones, tablet computers, and wearable devices are playing an increasingly important role in our daily life. Moreover, mobile applications running on MDs, such as immersive gaming and human-computer interaction, often have stringent delay and processing requirements. Consequently, despite the tremendous improvement of MDs' processing capacity, executing applications solely on a MD is still incapable of providing a satisfactory Quality of Experience (QoE) to users. Besides, MDs are energy constrained, which further increases the tension between resource-limited MDs and computation-intensive mobile applications.

Fortunately, with advanced networking and multi-task technology, offloading part of the workload of an application (or simply, a task) from MDs to a remote cloud or a nearby mobile edge cloud [1] (also known as a cloudlet) provides a promising alternative to reduce the task execution time as well as prolong the lifetime of MDs. A mobile edge cloud consists of either one edge server or a set of devices that serve mobile users cooperatively. Though the mobile edge cloud is less powerful than a remote cloud, as it is located at the edge of the network, the transmission latency between a MD and the mobile edge cloud is much lower than that of the remote cloud. Besides, a mobile edge cloud can exploit the idle computing and storage resources at the network edge efficiently. In this work, we mainly focus on task offloading in a mobile edge cloud composed of multiple devices. Moreover, we focus on offloading computation-intensive and delay-sensitive tasks to a mobile edge cloud, for which the amount of energy consumed for communication (i.e., transmitting a task and its result between a mobile device and the mobile edge cloud) is much lower than the energy required for computing the task within a limited time duration.

Extensive research attention has been drawn to resource allocation for task offloading in both academia and industry [2]. In the single-user task offloading scenario, the CPU frequency at the local MD or the devices in the mobile edge cloud, and the amount of workload to be offloaded to each device in the mobile edge cloud, are investigated in [3], [4], [5], [6], [7], [8], [9], [10]. In regards to multi-user multi-task computation offloading, centralized algorithms are proposed in [11], [12], [13], [14], [15], [16], [17], [18] to balance workload from different mobile edge clouds or virtual machines in the same cloud server. However, to further enhance the agility of the mobile edge cloud, especially when the system consists of a set of distributed Wireless Devices (WDs), online distributed algorithms are preferred.

Furthermore, though task offloading can save energy for mobile devices significantly, the mobile edge cloud is still challenged with high energy consumption and carbon footprint. As a result, many mobile edge clouds, as discussed in [2] and [19], use renewable energy to realize green computing. Likewise, when the mobile edge cloud is composed of a set of WDs, renewable energy opens up the possibility of convenient and perpetual operation at the wireless devices even without connecting them to the power grid.

W. Chen and D. Wang are with the College of Computer Science and Electronic Engineering, Hunan University, Changsha, Hunan 410006, China. E-mail: [email protected], [email protected].

K. Li is with the College of Computer Science and Electronic Engineering, Hunan University, Changsha, Hunan 410006, China, and also with the Department of Computer Science, State University of New York, New Paltz, NY 12561. E-mail: [email protected].

Manuscript received 28 Nov. 2017; revised 27 Mar. 2018; accepted 31 Mar. 2018. Date of publication 13 Apr. 2018; date of current version 9 Oct. 2019. (Corresponding author: Weiwei Chen.) Digital Object Identifier no. 10.1109/TSC.2018.2826544


Renewable energy can be generated from various sources, such as solar radiation, wind, and indoor light. Though renewable energy is environmentally friendly, the amount of harvestable energy is time varying. Consequently, the energy harvesting policy and the task offloading policy together determine the efficiency of a green mobile edge cloud, i.e., the amount of workload that can be completed, the amount of energy saved by the MDs, etc.

Despite the efforts made in previous works, for a green mobile edge cloud that is composed of a set of wireless devices, since the workload from a mobile device can be computed by multiple wireless devices, how to assign workload from MDs to WDs with guaranteed system performance still remains an open issue. Moreover, to facilitate easy system implementation, how to make sure that all the MDs and WDs adjust to both energy and task demand variations distributively and efficiently is even more challenging. In this work, we strive to tackle these issues. In detail, the design goal is to exploit the computing capacity of the green mobile edge cloud efficiently, e.g., finish as many offloading requests as possible, and make the best use of the harvestable energy at each WD in the mobile edge cloud. To this end, the energy harvesting policy and the offloading scheduling scheme are carefully planned so that the amount of energy harvested at the mobile edge cloud and the amount of energy consumed for task offloading are well balanced. In addition, to meet different implementation requirements, both energy harvesting and scheduling decisions can be determined either in a centralized or in a distributed way. The main contributions of this work are listed in the following.

(1) We propose a multi-user multi-task computation offloading framework for a green mobile edge cloud computing system. In the proposed approach, the dynamics of energy arrivals at the mobile edge cloud as well as task arrivals at different MDs are jointly considered;

(2) We introduce the concept of energy links (not real wireless links) and show that offloading a task from a MD to the mobile edge cloud is equivalent to routing the harvested energy from the mobile edge cloud to a MD. With energy links, the multi-user multi-task offloading problem is then cast into a Maximal Independent Set (MIS) problem; and

(3) Since the problem is NP-hard, centralized and distributed greedy maximal scheduling algorithms and their performance bounds are studied. Both theoretical analysis and simulation results indicate that the proposed centralized and distributed greedy scheduling algorithms achieve similar performance. Moreover, the proposed scheduling algorithms provide 18.8 to 31.9 percent higher average system utility than a random scheduling scheme.

The rest of the paper is organized as follows. Prior works are reviewed in Section 2. The system model is discussed in Section 3. Section 4 formulates the multi-user multi-task computation offloading problem; since the problem is NP-hard, a centralized and a distributed greedy maximal scheduling algorithm are presented. Section 5 analyzes the implementation complexity as well as the performance bounds of the proposed scheduling algorithms. Simulation results are investigated in Section 6. In Section 7, a conclusion is drawn.

2 RELATED WORKS

In this section, we give a brief overview of related works on resource allocation for task offloading in mobile edge cloud computing. Task offloading in mobile edge cloud computing can be powered by either the power grid or renewable energy. When the mobile edge cloud has a reliable energy supply from the power grid, integer programming is proposed in [5] to determine which part of a task is offloaded to the mobile edge cloud and which part should be executed at the MD in the single-task offloading scenario. The prior work [20] applies game theory to coordinate task offloading from various users with various task requests (task length, task response delay requirement, etc.) to the mobile edge cloud. When the mobile edge cloud consists of a set of devices that compute tasks simultaneously for a mobile device, [21] and [7] propose to offload a task from a MD to multiple MDs through a single hop or multiple hops, so as to minimize the overall task response time. To reduce the transmission delay caused by multi-hop task dissemination, the prior work [6] offloads a task to multiple MDs that are directly connected to the task initiator. Semi-definite cone programming is discussed in [10] to facilitate task offloading from a MD to a mobile edge cloud that contains multiple WDs.

For the multi-task offloading problem, the authors in [4], [14] introduce heuristic algorithms to assign sub-tasks to different cores or Virtual Machines (VMs) in a cloud server. Chen et al. [16] propose a game theoretical approach to let multiple MDs decide whether they should offload their tasks to the mobile edge cloud or not. As the computing capacity of the server in the mobile edge cloud is assumed to be extremely high, the bottleneck for offloading tasks from multiple users is the bandwidth the network can provide. Guo et al. [11] approximate the joint power allocation and sub-task assignment problem with a convex optimization approach. By assuming almost infinite computing capability at the mobile edge cloud server, it mainly concerns which subset of sub-tasks should be offloaded to the mobile edge cloud server, and their offloading sequence. However, these works cannot be applied to the case where the computing capacities of the servers in the mobile edge cloud are limited.

In the context of multiple users and multiple mobile edge clouds/servers/VMs in a cloud, centralized resource allocation strategies [15], [22] are developed to balance the workload of each mobile edge cloud/VM in a cloud. Prior work [12] assumes that a set of MDs form a mobile edge cloud and help to execute tasks from each other so as to minimize the average energy consumption of all MDs. To encourage cooperation among different MDs, incentive schemes are introduced to prevent one MD from overusing the energy, bandwidth, or computing resources of other MDs. The ad hoc mobile edge cloud setting is also discussed in [13]. In Chen's work, graph theory is used to pair different D2D mobile devices so as to minimize the overall system energy consumption. Multiple users offloading tasks to multiple servers in mobile edge cloud computing is discussed in [23]. Even so, in [12], [13], [23], one task is assumed to be offloaded to only one MD or server in the mobile edge cloud.


This assumption, however, is not applicable to the case where a task is relatively large and has to be processed by multiple devices in the mobile edge cloud. An opportunistic offloading strategy is discussed in [3], in which a MD can dynamically determine whether it offloads its task to nearby MDs via a 3G/4G, WiFi, or D2D link based on the MD's location. Despite the efforts made in the previous works, in regards to a self-organized mobile edge cloud with a set of MDs, an efficient distributed resource allocation algorithm is preferable to a centralized scheme so as to facilitate more agile task offloading in mobile edge cloud computing.

On the other hand, energy harvesting technology has been studied recently to reduce the carbon footprint. Moreover, if wireless devices are powered by renewable energy harvested from the environment, they can be operated perpetually even without energy supply from the power grid. When energy harvesting is blended with mobile edge cloud computing, the network edge computing resources are expected to be used more efficiently. You et al. [8] assume that wireless energy transfer technology is used for a MD such as a wireless sensor or a wearable device. A binary offloading decision is made every time a task arrives at the mobile device, based on whether offloading the task to the mobile edge cloud will save its energy or not. Joint design of the transmit energy beamformer at the AP, the CPU processing frequency, as well as the task offloading strategies for multiple users is presented in [24]. Moreover, in this work the mobile users are charged by their associated AP via Wireless Power Transfer (WPT). In [9], the authors assume a MD is able to harvest energy from the environment. Since the harvestable energy at the MD is highly dynamic, they use the Lyapunov optimization approach to design the task offloading strategy so as to maximize the time-average utility of the MD. The prior works [8] and [9] only consider resource allocation optimization for one MD. As to the multi-user case, to handle the uncertainty in the amount of harvestable energy, the authors in [19] propose a reinforcement learning based resource allocation algorithm to dynamically offload workload to different servers in the mobile edge cloud so that the long-term service delay and system operation cost can be minimized.

The approaches mentioned above are more suitable for a centralized system that can monitor all the nodes in the network and make system-wide decisions jointly. Besides, they only focus on task offloading from users to a single mobile edge cloud server. When a green mobile edge cloud is composed of a set of wireless devices, how to implement resource allocation strategies for multi-user multi-task offloading efficiently, especially when multiple devices can compute workload from a single user, still remains an open issue. In this work, we design efficient scheduling algorithms to address this problem.

3 SYSTEM MODEL

The system consists of a set of WDs (denoted as $W$), which form the mobile edge cloud, and a set of MDs (denoted as $M$), where $|W|$ and $|M|$ are the cardinalities of $W$ and $M$ respectively. The WDs, such as Wi-Fi routers, femto-cell base stations, printers, etc., are equipped with energy harvesting devices so as to harvest energy from the environment and provide computation offloading services to the MDs. The MDs, by buying the offloading services, can offload their tasks to the mobile edge cloud to extend their own computing ability and reduce their energy consumption. Moreover, a MD can only offload its task to WDs that are within its direct communication range. A WD, though it can accept tasks from different MDs at different times, is only capable of computing workload from one MD at a time. In addition, different wireless links use orthogonal channels (such as in LTE [25] or IEEE 802.11ax systems [26]) for data transmission, so that one wireless link does not experience interference from other wireless links. Fig. 1 illustrates a typical computation offloading scenario in which a set of wireless APs constitute a mobile edge cloud. In this figure, MD1, MD2, and MD3 can establish direct communication links to AP1, AP2, AP3, and AP4. The workloads from MD1, MD2, and MD3 are denoted as WK1, WK2, and WK3 respectively. Based on the MDs' locations as well as the amount of workload to be offloaded at each MD, we can schedule task offloading in the following way: AP1 and AP2 serve MD1, AP3 serves MD2, and AP4 serves MD3.

Furthermore, time is divided into frames, where a frame contains a control sub-frame and a computation offloading sub-frame. In the control sub-frame, control information is exchanged among WDs and MDs to determine the offloading schedule. In the computation offloading sub-frame, workload is first transmitted to the WDs, and the WDs return the results to the MDs after they finish processing the workload. Here workload refers to the amount of sub-task from a MD.

A task to be offloaded to the mobile edge cloud from MD $i$ can be represented as a tuple $A_i(t) = \{r_i(t), \beta_i(t), \rho_i(t), u_i(t)\}$, where $r_i(t)$, $\beta_i(t)$, $\rho_i(t)$, and $u_i(t)$ denote the input data size, the ratio of the output data size to the input data size, the ratio of the number of CPU cycles to the input data size, and the revenue the mobile edge cloud obtains if the task is successfully completed by the mobile edge cloud. Moreover, $A_i(t) = \{0, 0, 0, 0\}$ corresponds to the case when no task offloading is requested by MD $i$ at frame $t$. The tasks to be offloaded are fine grained and can be divided into pieces of arbitrary size. As a certain amount of communication overhead is always needed to facilitate task offloading between a MD and a WD, to guarantee the task offloading efficiency it is required that the amount of workload offloaded to a WD is no less than $\gamma\, r_i(t)$ if a task $A_i(t)$ is to be offloaded to the mobile edge cloud. Furthermore, a task has to be finished within a frame as long as it is admitted into the mobile edge cloud.

Fig. 1. An illustrative scenario for computation offloading in mobile edge cloud computing. In this figure, the mobile edge cloud is composed of a set of wireless APs.


One WD can only compute workload from one MD at a time. However, one WD can serve different MDs at different times. On the other hand, multiple WDs can compute tasks for one MD cooperatively. Denote $s_{ij}(t)$ as an indicator: $s_{ij}(t) = 1$ if WD $j$ serves MD $i$ at frame $t$ and $s_{ij}(t) = 0$ otherwise. Let $M_j$ be the set of MDs connected directly to WD $j$. Hence,

$$\sum_{i\in M_j} s_{ij}(t) \le 1, \quad \forall j \in W. \tag{1}$$

Let $\bar{s}_i(t)$ be a binary number, where $\bar{s}_i(t) = 1$ indicates that a task initiated by MD $i$ at frame $t$ is accepted by a set of WDs in the mobile edge cloud, and $\bar{s}_i(t) = 0$ implies that the task from MD $i$ is not accepted. Denote $W_i$ as the set of WDs connected directly to MD $i$, and $x_{ij}(t)$ as the amount of workload scheduled to WD $j$ from MD $i$. To meet the task offloading feasibility constraint, it is required that

$$\sum_{j\in W_i} x_{ij}(t)\, s_{ij}(t) = r_i(t)\,\bar{s}_i(t), \quad \forall i \in M, \tag{2a}$$

$$x_{ij}(t) \ge \gamma\, s_{ij}(t)\, r_i(t), \quad \forall i \in M,\ j \in W_i. \tag{2b}$$

Let the CPU frequency of WD $j$ be $f_j$, and $c_{ij}$ be the data rate between MD $i$ and WD $j$. Due to the delay constraint, the total amount of time for offloading workload $x_{ij}(t)$, i.e., the time used for transmitting the workload from MD $i$ to WD $j$, computing the workload at WD $j$, and returning the result to MD $i$, should be less than or equal to the duration of the computation offloading sub-frame (denoted as $p_{t,o}$). Hence,

$$x_{ij}(t)\left(\frac{\beta_i(t)}{c_{ij}} + \frac{1}{c_{ij}} + \frac{\rho_i(t)}{f_j}\right) s_{ij}(t) = x_{ij}(t)\, w_{ij}(t)\, s_{ij}(t) \le p_{t,o}, \quad \forall j \in W_i. \tag{3}$$

Here $w_{ij}(t) = \frac{\beta_i(t)}{c_{ij}} + \frac{1}{c_{ij}} + \frac{\rho_i(t)}{f_j}$ is the total amount of time WD $j$ needs to finish one bit of workload initiated by MD $i$ at frame $t$.

Similarly, WD $j$ consumes energy for receiving and computing the workload, as well as for returning the result to the task initiator. Let the transmitting and receiving power at WD $j$ be $p_j^t$ and $p_j^r$ respectively. Denote the amount of energy consumed at WD $j$ for MD $i$ at frame $t$ as $e_{ij}(t)$. $e_{ij}(t)$ can be expressed as

$$e_{ij}(t) = \left(\frac{p_j^t\,\beta_i(t)}{c_{ij}} + \frac{p_j^r}{c_{ij}} + k f_j^{s}\,\rho_i(t)\right) x_{ij}(t)\, s_{ij}(t) = x_{ij}(t)\, z_{ij}(t)\, s_{ij}(t), \quad \forall i \in M,\ j \in W_i, \tag{4}$$

where $k$ is the effective switched capacitance, $s$ is a value close to 2 (we set it to 2 in this work), and $z_{ij}(t) = \frac{p_j^t\,\beta_i(t)}{c_{ij}} + \frac{p_j^r}{c_{ij}} + k f_j^{s}\,\rho_i(t)$. The overall amount of energy consumed by WD $j$, denoted as $e_j(t)$, can be expressed as

$$e_j(t) = \sum_{i\in M_j} e_{ij}(t), \quad \forall j \in W. \tag{5}$$
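To make the per-bit cost model concrete, the following minimal Python sketch evaluates $w_{ij}(t)$ and $z_{ij}(t)$ and checks the delay constraint (3) for a single (MD, WD) pair. All numeric values, including the switched-capacitance constant, are illustrative assumptions rather than parameters taken from the paper.

```python
# Sketch of the per-bit time/energy model in Eqs. (3)-(5); the numbers used
# below are illustrative assumptions, not parameters from the paper.

def per_bit_time(beta_i, rho_i, c_ij, f_j):
    """w_ij(t): seconds a WD needs per bit of input workload
    (upload 1 bit, compute rho_i cycles, return beta_i bits)."""
    return beta_i / c_ij + 1.0 / c_ij + rho_i / f_j

def per_bit_energy(beta_i, rho_i, c_ij, p_tx, p_rx, kappa, f_j, sigma=2):
    """z_ij(t): joules a WD spends per bit of input workload."""
    return p_tx * beta_i / c_ij + p_rx / c_ij + kappa * (f_j ** sigma) * rho_i

# Example numbers (assumed): 20 Mbit/s link, 2.2 GHz CPU, 750 cycles/bit.
w = per_bit_time(beta_i=0.5, rho_i=750, c_ij=20e6, f_j=2.2e9)
z = per_bit_energy(beta_i=0.5, rho_i=750, c_ij=20e6,
                   p_tx=0.1, p_rx=0.07, kappa=1e-27, f_j=2.2e9, sigma=2)

x_ij = 1e5                       # bits assigned to this WD
p_to = 50e-3                     # offloading sub-frame length (s)
feasible = x_ij * w <= p_to      # delay constraint (3)
e_ij = x_ij * z                  # energy consumed at the WD, Eq. (4)
print(w, z, feasible, e_ij)
```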

In addition, the amount of harvestable energy in the environment arrives at a WD randomly. Let $\bar{h}_j(t)$ and $h_j(t)$ be the maximum amount of harvestable energy arriving at WD $j$ and the actual amount of energy WD $j$ harvests at frame $t$. Obviously, $h_j(t) \le \bar{h}_j(t)$. WD $j$ maintains an energy queue for computation offloading. The energy harvested in the previous frame can be used in the current frame. Denote the energy queue of WD $j$ at frame $t$ as $q_j(t)$. $q_j(t)$ evolves as

$$q_j(t+1) = q_j(t) + h_j(t) - e_j(t), \quad \forall j \in W. \tag{6}$$

Note that, as the energy consumption of WD $j$ can be no more than the available energy in its energy queue, it is required that

$$e_j(t) \le q_j(t), \quad \forall j \in W. \tag{7}$$

The notations used in this article are listed in Table 1.

TABLE 1
Notations

$M$: The set of MDs (Mobile Devices).
$W$: The set of WDs (Wireless Devices).
$M_j$: The set of MDs connected with WD $j$.
$W_i$: The set of WDs connected with MD $i$.
$r_i(t)$: The input data size of the task from MD $i$ at frame $t$.
$\beta_i(t)$: The ratio of the output to the input data size of the task from MD $i$ at frame $t$.
$p_{t,o}$: The duration of a computation offloading sub-frame.
$\rho_i(t)$: The ratio of the CPU cycles to the input data size of the task from MD $i$ at frame $t$.
$u_i(t)$: The revenue of successfully offloading the task from MD $i$.
$w_{ij}(t)$: The amount of time WD $j$ needs to finish one bit of workload from MD $i$.
$z_{ij}(t)$: The total amount of energy WD $j$ needs to process one bit of workload from MD $i$.
$\bar{s}_i(t)$: $\bar{s}_i(t) = 1$ indicates a task from MD $i$ is accepted by a set of WDs in the system; otherwise $\bar{s}_i(t) = 0$.
$s_{ij}(t)$: $s_{ij}(t) = 1$ if WD $j$ serves MD $i$ at frame $t$; otherwise $s_{ij}(t) = 0$.
$x_{ij}(t)$: The amount of workload assigned to WD $j$ from MD $i$ at frame $t$.
$e_j(t)$: The amount of energy consumed by WD $j$ at frame $t$.
$q_j(t)$: The energy queue of WD $j$ at frame $t$.
$h_j(t)$: The amount of energy harvested by WD $j$ at frame $t$.
$\bar{h}_j(t)$: The maximum amount of harvestable energy at WD $j$ in frame $t$.
$\theta_j$: Energy drift at WD $j$.
$v_0$: A weight used to balance the energy queue length and the system utility (revenue).
$m_l$: The task initiator of energy link $l$.
$R_l(t)$: The set of WDs assigned to MD $m_l$ in energy link $l$ at frame $t$.
$b_l(t)$: The weight of energy link $l$ at frame $t$.
$EL(t)$: The set of all the energy links at frame $t$.
$I_s(t)$: The maximum independent energy link set at frame $t$.
$LS(t)$: The set of all independent energy link sets at frame $t$.
$\delta_{\max}$: The maximum conflict degree of all possible $EL(t)$.
$h$: The number of iterations in the proposed DGMS based algorithm.
$U^{CGMS}_{v_0}$: The overall system utility of the CGMS-based scheme.
$U^{DGMS}_{v_0}$: The overall system utility of the DGMS-based scheme.


4 THE PROPOSED CENTRALIZED AND DISTRIBUTED MULTI-USER MULTI-TASK OFFLOADING SCHEMES

4.1 Formulation

In this work, we intend to maximize the overall revenue of the WDs. Hence, the multi-user multi-task offloading problem can be formulated as:

$$\max\ \lim_{t_s\to\infty}\frac{1}{t_s}\sum_{t=1}^{t_s}\sum_{i\in M} u_i(t)\,\bar{s}_i(t) \tag{8a}$$

$$\text{s.t.}\quad (1)-(7). \tag{8b}$$

Notice that our approach can be easily extended to achieve other design goals, e.g., to minimize the energy consumption of all the MDs, etc.

According to Equation (8), if we could exhaustively list all possible task arrivals of all the MDs and all possible energy arrivals of all the WDs, we could design a specific multi-user multi-task offloading scheme for each combination of task and energy arrivals, and the optimal system revenue (the optimal value of Equation (8)) would be achieved. However, as the task and energy arrivals are time variant, the total number of task and energy arrival combinations for all of the users in the system grows exponentially with the network size. For instance, if there are $y$ different levels of harvestable energy at each WD, $\nu$ different types of tasks, $m$ MDs, and $n$ WDs, the total number of combinations is $y^n(\nu+1)^m$.
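As a quick numeric illustration of this growth (the parameter values below are assumptions, not simulation settings from the paper):

```python
# Size of the exhaustive task/energy-arrival enumeration y^n * (nu + 1)^m,
# using assumed values (y energy levels, nu task types, m MDs, n WDs).
y, nu, m, n = 4, 4, 10, 20
combinations = (y ** n) * ((nu + 1) ** m)
print(f"{combinations:.3e}")   # ~1.07e+19 states to enumerate
```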

To this end, we use the Lyapunov optimization approach proposed in [27] to resolve Equation (8) without exhaustively listing all the combinations. In detail, let $\theta_j$ be a constant, named the energy drift of energy queue $q_j(t)$. Here,

$$\theta_j = e_j^{\max} + f(v_0),$$

$$e_j^{\max} = \max_{\{1\le t\le t_s,\, i\in M_j\}}\left\{\frac{z_{ij}(t)\, p_{t,o}}{w_{ij}(t)}\right\}, \qquad h_j^{\max} = \max_{\{1\le t\le t_s\}}\bar{h}_j(t), \qquad h^{\max} = \max_{\{j\in W\}} h_j^{\max},$$

$$z^{\min} = \min_{\{i\in M,\, j\in W_i,\, 1\le t\le t_s\}} z_{ij}(t), \qquad z^{\max} = \max_{\{i\in M,\, j\in W_i,\, 1\le t\le t_s\}} z_{ij}(t), \qquad r^{\min} = \min_{\{1\le t\le t_s,\, i\in M\}} r_i(t),$$

$$\text{and}\quad f(v_0) = \frac{r^{\min}(1-\gamma)\, h^{\max} z^{\max} + v_0\, u^{\max}}{\gamma\, r^{\min} z^{\min}},$$

where $u^{\max}$ denotes the maximum task revenue. Notice that $e_j^{\max}$ represents the maximum amount of energy wireless device $j$ consumes for computing the workload from any of its neighboring mobile devices $M_j$; hence it takes the maximum value of $z_{ij}(t) p_{t,o}/w_{ij}(t)$ over all mobile devices $i \in M_j$.

Moreover, the energy drift $\theta_j$, a function of $v_0$, forces the energy queue to stay around $\theta_j$: the higher the $v_0$, the more energy is buffered in the energy queue, and the more energy is available to be used. Let $Q'(t) = \{q'_j(t) = q_j(t) - \theta_j \mid j \in W\}$ be the set of virtual queues for all the WDs in $W$. Define $L(t)$ and $\Delta(t)$ as

$$L(t) = \sum_{j\in W} \big(q'_j(t)\big)^2, \tag{9a}$$

$$\Delta(t) = \mathbb{E}\{L(t+1) - L(t) \mid Q'(t)\}. \tag{9b}$$

We have

$$\Delta(t) - v_0\,\mathbb{E}\Big\{\sum_{i\in M} u_i(t)\bar{s}_i(t)\,\Big|\,Q'(t)\Big\} \le -\sum_{i\in M} v_0\,\mathbb{E}\big\{u_i(t)\bar{s}_i(t) \mid Q'(t)\big\} + \mathbb{E}\Big\{\sum_{j\in W}\big(q_j(t)-\theta_j\big)\Big(h_j(t)-\sum_{i\in M_j} e_{ij}(t)\Big)\,\Big|\,Q'(t)\Big\} + d. \tag{10}$$

Here $d = \frac{1}{2}\sum_{j\in W}(h_j^{\max})^2 + \frac{1}{2}\sum_{j\in W}(e_j^{\max})^2$. To maintain system stability, the Lyapunov optimization approach aims at finding an energy harvesting strategy $\{h_j(t)\}$ and a task offloading strategy $\{\bar{s}_i(t), s_{ij}(t), x_{ij}(t)\}$ that minimize the right-hand side of Equation (10). Therefore, for the given virtual energy queues $Q'(t)$, $\{\bar{s}_i(t), s_{ij}(t), x_{ij}(t), h_j(t)\}$ can be determined by

$$\arg\inf_{(1)-(5),(7)}\Big\{-\sum_{j\in W}\big(q_j(t)-\theta_j\big)\sum_{i\in M_j} x_{ij}(t)\, z_{ij}(t) + \sum_{j\in W}\big(q_j(t)-\theta_j\big)\, h_j(t) - \sum_{i\in M} v_0\, u_i(t)\,\bar{s}_i(t)\Big\}. \tag{11}$$

Equation (11) is then decoupled into the following two separate sub-problems.

1. Energy harvesting strategy:

$$h_j(t) = \begin{cases}\bar{h}_j(t), & q_j(t) < \theta_j,\\ 0, & \text{otherwise.}\end{cases} \tag{12}$$

2. Multi-user multi-task offloading strategy:

$$\{\bar{s}_i(t), s_{ij}(t), x_{ij}(t)\} = \arg\max_{(1)-(5),(7)}\ \sum_{i\in M}\Big[v_0\, u_i(t)\,\bar{s}_i(t) + \sum_{j\in W_i}\big(q_j(t)-\theta_j\big)\, z_{ij}(t)\, x_{ij}(t)\Big]. \tag{13}$$
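To make the per-frame decision concrete, the following minimal Python sketch applies the harvesting rule of Equation (12) and the queue dynamics of Equation (6) at a single WD; the perturbation level $\theta_j$ and the arrival/consumption trace are assumed values, and the offloading sub-problem (13) is not solved here.

```python
# Per-frame drift-plus-penalty bookkeeping at one WD: the harvest-all-or-nothing
# rule of Eq. (12) followed by the energy-queue update of Eq. (6). A minimal
# sketch; theta_j and the traces below are assumed values.

def harvest(q_j, theta_j, h_avail):
    """Eq. (12): harvest everything that arrived if the queue is below the
    perturbation level theta_j, otherwise harvest nothing."""
    return h_avail if q_j < theta_j else 0.0

def step(q_j, theta_j, h_avail, e_j):
    """One frame: decide harvesting, then apply the queue dynamics of Eq. (6)."""
    h_j = harvest(q_j, theta_j, h_avail)
    return q_j + h_j - e_j

q, theta = 0.0, 5.0              # joules (assumed)
for h_avail, e_spent in [(0.9, 0.0), (0.9, 0.4), (0.9, 0.4), (0.0, 0.3)]:
    q = step(q, theta, h_avail, e_spent)
print(q)   # energy left in the queue after four frames
```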

Before discussing how to solve Equation (13), we first introduce Lemma 1, which guarantees that a WD can always provide enough energy for computing and transmitting the workload from a MD once the WD is recruited for task offloading.

Lemma 1. $\forall i \in M_j$, if $q_j(t) < e_j^{\max}$, then $x_{ij}(t) = 0$, and WD $j$ will not participate in task computing in frame $t$.

Proof. The proof is provided in Appendix A, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TSC.2018.2826544.

Recall that $e_j^{\max}$ is the maximum amount of energy WD $j$ consumes for computing workload from a MD.


Lemma 1 implies that, for any mobile device $i$ in the neighborhood of wireless device $j$ ($i \in M_j$), if the currently available energy $q_j(t)$ of WD $j$ is smaller than $e_j^{\max}$, mobile device $i$ cannot get any service from wireless device $j$ ($x_{ij}(t) = 0$). With Lemma 1, constraint (7) can be safely removed from Equation (13). Moreover, as will be shown in Appendix B, available in the online supplemental material, by removing Equation (7) we can easily derive a lower bound for the scheduling schemes proposed in the following sub-sections.

Lemma 2. The feasibility of given $\{\bar{s}_i(t) \mid i \in M\}$ and $\{s_{ij}(t) \mid i \in M, j \in W_i\}$ can be verified with the following set of equations:

$$\sum_{j\in W_i}\frac{p_{t,o}}{w_{ij}(t)}\, s_{ij}(t) \ge r_i(t)\,\bar{s}_i(t), \tag{14a}$$

$$\sum_{i\in M_j} s_{ij}(t) \le 1, \quad \forall j \in W, \tag{14b}$$

$$\Big(\sum_{j\in W_i} s_{ij}(t)\Big)\cdot \gamma \le 1, \tag{14c}$$

$$q_j(t) \ge e_j^{\max}\, s_{ij}(t), \quad \forall j \in W. \tag{14d}$$

Moreover, if $\{\bar{s}_i(t) \mid i \in M\}$ and $\{s_{ij}(t) \mid i \in M, j \in W_i\}$ are feasible, the associated optimal workload assignment $x_{ij}(t)$ can be derived from Equation (15).

$$x^*_{ij}(t) = \begin{cases}
\dfrac{p_{t,o}}{w_{ij}(t)}, & j \in T'_{i,1,k^*},\\[6pt]
y_{n^*,k^*}(t) - \displaystyle\sum_{j'\in T'_{i,1,k^*}(t)}\frac{p_{t,o}}{w_{ij'}(t)}, & j = t_{k^*+1},\\[6pt]
\gamma\, r_i(t), & j \in T'_{i,\,k^*+2,\,n^*+1},\\[2pt]
0, & \text{otherwise.}
\end{cases} \tag{15}$$

Here $t_j(t)$ is the WD whose $(q_j(t)-\theta_j)\, z_{ij}(t)$ is the $j$th largest, and

$$n^* = \arg\Big\{n \,\Big|\, \sum_{j\in T'_{i,1,n}}\frac{1}{w_{ij}(t)} < \frac{r_i(t)}{p_{t,o}} \le \sum_{j\in T'_{i,1,n+1}}\frac{1}{w_{ij}(t)}\Big\},$$

$$y_{n^*,k}(t) = r_i(t)\big(1-(n^*-k)\gamma\big),$$

$$k^* = \arg\Big\{k \,\Big|\, y_{n^*,k-2}(t) \le \sum_{j\in T'_{i,1,k}}\frac{p_{t,o}}{w_{ij}(t)} < y_{n^*,k}(t)\Big\},$$

$$T'_{i,k,n} = \{t_k(t), \ldots, t_n(t)\}, \quad 1 \le k \le n \le |W_i(t)|.$$

Proof. Equations (14a)-(14b) guarantee that there are enough WDs to process the workload from MD $i$, and that each WD only processes workload from one MD. Moreover, Equation (14c) implies that a WD will either compute no less than $\gamma\, r_i(t)$ amount of workload for MD $i$, or not compute any workload for it. Equation (14d) makes sure that the WDs that will participate in task offloading have more than $e_j^{\max}$ amount of energy. Hence, if a schedule satisfies Equations (14a), (14b), (14c), and (14d), it is a feasible schedule. As to Equation (15), it is a straightforward derivation of Equation (13) when $\{\bar{s}_i(t) \mid i \in M\}$ and $\{s_{ij}(t) \mid i \in M, j \in W_i\}$ are given.

Lemma 2 indicates that WDs with higher virtual queue length $(q_j(t) - \theta_j)$ have a higher chance of being assigned more workload.
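The case analysis in Equation (15) is essentially a capacity-constrained greedy fill. The sketch below implements a simplified variant of that idea: rank WDs by $(q_j(t)-\theta_j)z_{ij}(t)$, fill high-priority WDs to their per-frame capacity $p_{t,o}/w_{ij}(t)$, and give every recruited WD at least $\gamma r_i(t)$. It illustrates the structure only, not the exact boundary-index bookkeeping of Equation (15), and all inputs are made-up values.

```python
# Simplified greedy workload split in the spirit of Eq. (15). Illustrative
# sketch with assumed inputs; not the paper's exact case analysis.

def split_workload(r_i, gamma, p_to, wds):
    """wds: list of (wd_id, w_ij, priority); returns {wd_id: bits} or None."""
    wds = sorted(wds, key=lambda w: w[2], reverse=True)
    cap = {wd: p_to / w_ij for wd, w_ij, _ in wds}
    if sum(cap.values()) < r_i or len(wds) * gamma > 1:
        return None                      # request cannot be admitted
    alloc, remaining = {}, r_i
    min_share = gamma * r_i
    for idx, (wd, w_ij, _) in enumerate(wds):
        # bits that must be left for the remaining (lower-priority) WDs
        reserve = min_share * (len(wds) - idx - 1)
        x = min(cap[wd], remaining - reserve)
        x = max(x, min_share)
        alloc[wd] = x
        remaining -= x
        if remaining <= 1e-9:
            break
    return alloc if remaining <= 1e-9 else None

print(split_workload(r_i=2.0e5, gamma=0.1, p_to=0.05,
                     wds=[("wd1", 4.2e-7, 3.0), ("wd2", 5.0e-7, 2.0),
                          ("wd3", 6.0e-7, 1.0)]))
```

On this example, the highest-priority WD is filled to capacity, the second one receives the remainder, and the lowest-priority WD gets only the minimum share, mirroring the three non-zero cases of Equation (15).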

We further associate a weight $b(\bar{s}_i(t), \{s_{ij}(t)\})$ with the given $\bar{s}_i(t)$ and $\{s_{ij}(t)\}$. $b(\bar{s}_i(t), \{s_{ij}(t)\})$ is expressed as

$$b\big(\bar{s}_i(t), \{s_{ij}(t)\}\big) = v_0\, u_i(t) + \sum_{j\in W_i}\big(q_j(t)-\theta_j\big)\, z_{ij}(t)\, x^*_{ij}(t), \tag{16}$$

where $x^*_{ij}(t)$ is the solution derived from Equation (15).

Next, we introduce the concept of the energy link, which is then used to search $\{\bar{s}_i(t)\}$ and $\{s_{ij}(t)\}$ efficiently. An energy link is represented as $\{m_l(t), R_l(t)\}$, where $m_l$ means energy link $l$ is used to offload the task initiated by MD $m_l$, and $R_l$ is the set of WDs that compute the task cooperatively. Since multiple energy links may or may not have the same task initiator and the same set of wireless devices computing the job for the task initiator, neither $m_l(t)$ nor $R_l(t)$ alone is able to identify an energy link; hence the combination $(m_l(t), R_l(t))$ is introduced to represent an energy link. To meet the task feasibility constraints (2)-(3), $R_l$ is selected as

$$R_l(t) = \Big\{W'_i \,\Big|\, W'_i \subseteq W_i,\ i = m_l(t),\ \sum_{j\in W'_i}\frac{p_{t,o}}{w_{ij}(t)} \ge r_i(t),\ |W'_i|\cdot\gamma \le 1\Big\}. \tag{17}$$

Note that an energy link is used to offload a task from a MD to a set of WDs; it is not a real wireless link. For each energy link $l$, we also associate a weight (denoted as $b_l(t)$), and $b_l(t)$ is set according to Equation (16). Moreover, energy links $l_1$ and $l_2$ cannot be activated simultaneously, i.e., $l_1$ and $l_2$ "interfere" with each other, when

1) the two energy links share the same task initiator: $m_{l_1} = m_{l_2}$; or
2) there exists at least one WD that belongs to both $R_{l_1}(t)$ and $R_{l_2}(t)$, that is, $R_{l_1}(t) \cap R_{l_2}(t) \neq \emptyset$.

Let the set of energy links be $EL$. We can then establish an interference graph $IG$ with $IG = \{g_{l_1,l_2} \mid l_1, l_2 \in EL\}$. Here $g_{l_1,l_2} = 1$ if $l_1$ interferes with $l_2$. With $IG$, a maximum independent energy link set, denoted as $I_s$, should satisfy the following constraints:

1) $\forall l_1, l_2 \in I_s$, $g_{l_1,l_2} = 0$;
2) $\forall l_3 \notin I_s$, $\exists l_1 \in I_s$ such that $g_{l_1,l_3} = 1$.

We also define $N_l(t)$ as the set of energy links that interfere with energy link $l$ ($N_l(t) = \{l_1 \mid g_{l,l_1} = 1\}$), and $\varpi_{\max}$ as the maximum degree over all $N_l(t)$, i.e., $\varpi_{\max} = \max_{l\in EL}|N_l(t)|$, where $|N_l(t)|$ is the cardinality of $N_l(t)$. Note that, with the concept of the energy link, task offloading can be viewed as routing energy from the mobile edge cloud (WDs) to a MD through an energy link. Hence, maximizing the overall revenue of the mobile edge cloud is equivalent to routing the harvested energy from the mobile edge cloud to the MDs as efficiently as possible. Designate $LS$ as the set of all possible $I_s$, and $\{\bar{s}^*_i(t), s^*_{ij}(t)\}$ as the optimal solution of Equation (13).


$\{\bar{s}^*_i(t), s^*_{ij}(t)\}$ is determined by the following equation:

$$\{\bar{s}^*_i(t), s^*_{ij}(t)\} = \arg\max_{I_s\in LS}\ \sum_{l\in I_s} b_l(t). \tag{18}$$

With (18), we can easily derive the following theorem.

Theorem 1. The multi-user multi-task offloading problem, Equation (13), is NP-hard.

Proof. Equation (13) can be reduced to Equation (18), and Equation (18) is a maximum independent set problem, which is NP-hard as has been discussed in [28]. Consequently, Equation (13) is also NP-hard.

When the network size is small, the optimal schedule $\{\bar{s}_i(t), s_{ij}(t)\}$ can be derived by exhaustively searching all possible maximum independent sets ($I_s$). However, the associated computational complexity grows exponentially with the network size. Hence, online heuristic algorithms are needed to find a good $I_s$ efficiently.

4.2 The Proposed Centralized Multi-User Multi-Task Offloading Scheme

In this sub-section, we discuss how to determine the multi-user multi-task offloading scheme with a centralized approach. We use Greedy Maximal Scheduling (GMS) to derive a schedule for (18); the details of the GMS-based multi-user multi-task offloading scheduling algorithm are shown in Algorithm 1. Algorithm 1 uses either the centralized GMS algorithm (abbreviated as CGMS) or the distributed GMS algorithm (abbreviated as DGMS) to search for a feasible schedule.

Algorithm 1. The Multi-User Multi-Task Offloading Algorithm

Data: $\{\bar{h}_j(t)\}$
Initialization: $\forall j \in W$, $q_j(0) \leftarrow 0$;
foreach frame $t$ do
  foreach $j \in W$ do
    Harvest energy according to Equation (12) and broadcast the available energy to the associated MDs;
  end
  Use the CGMS or DGMS algorithm to search $\{\bar{s}_i(t)\}$, $\{s_{ij}(t)\}$, and $\{x_{ij}(t)\}$;
  Schedule the admitted task offloading requests;
  foreach $j \in W$ do
    Update $q_j(t)$ according to Equation (6).
  end
end

In the CGMS algorithm (shown in Algorithm 2), a centralized controller accepts one task offloading request and assigns its workload to WDs greedily so as to maximize the system utility in the current iteration. This procedure is conducted iteratively until no more requests can be admitted in the current frame.

4.3 The Proposed Distributed Multi-User Multi-Task Offloading Scheme

As has been discussed before, a centralized greedy algorithm requires a centralized controller to admit and schedule workload from different MDs jointly. When the network size grows, or when there are no wired connections between different WDs, coordinating workload scheduling in a centralized way incurs high system implementation complexity. Hence a distributed scheduling algorithm is highly preferred. In this section, we propose a distributed multi-user multi-task offloading algorithm. To this end, we extend the algorithm discussed in [29] to search for a feasible schedule for Equation (18). The details of the DGMS (Distributed GMS) based multi-user multi-task offloading algorithm are shown in Algorithm 3.

Algorithm 2. The Centralized Greedy Maximal Scheduling Algorithm

Data: $EL$, $\{A_i(t) \mid i \in M\}$
Result: $\{\bar{s}_i(t), s_{ij}(t)\}$, $\{x_{ij}(t)\}$
Initialization: $EL' \leftarrow EL$; a centralized controller collects the virtual queue information $q'_j(t)$ for all $j \in W$;
foreach $j \in W$ do
  if $q_j(t) < e_j^{\max}$ then
    $EL' \leftarrow EL' \setminus \{l \mid j \in R_l(t)\}$
  end
end
$fm \leftarrow$ true;
while $fm$ do
  $l^* \leftarrow \arg\max_{l\in EL'} b_l(t)$;
  if $b_{l^*}(t) > 0$ then
    $\bar{s}_{m_{l^*}}(t) \leftarrow 1$; $s_{m_{l^*},j}(t) \leftarrow 1$;
    foreach $j \in R_{l^*}(t)$ do
      Set $x_{m_{l^*},j}(t)$ according to Equation (15); $EL' \leftarrow EL' \setminus \{l \mid g_{l,l^*} = 1\}$;
      Send a task offloading confirmation and the corresponding workload assignment to MD $m_{l^*}$ and the WDs in $R_{l^*}(t)$;
    end
  else
    $fm \leftarrow$ false
  end
end
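The loop above can be captured in a few lines of code. The self-contained Python sketch below mirrors the greedy pass of Algorithm 2: repeatedly admit the highest-weight remaining energy link and drop every link that conflicts with it. The links and weights are the same toy values as in the earlier brute-force example; in the real scheduler the weights would come from Equation (16).

```python
# Compact sketch of the CGMS loop in Algorithm 2, on toy data.

links = [
    ("MD1", frozenset({"AP1", "AP2"}), 1.8),
    ("MD1", frozenset({"AP2"}),        1.1),
    ("MD2", frozenset({"AP3"}),        1.0),
    ("MD3", frozenset({"AP3", "AP4"}), 1.4),
    ("MD3", frozenset({"AP4"}),        0.9),
]

def interfere(l1, l2):
    # same task initiator, or at least one shared WD
    return l1[0] == l2[0] or bool(l1[1] & l2[1])

def cgms(candidate_links):
    schedule, remaining = [], list(candidate_links)
    while remaining:
        best = max(remaining, key=lambda l: l[2])
        if best[2] <= 0:
            break
        schedule.append(best)                      # admit MD m_l with WDs R_l
        remaining = [l for l in remaining
                     if l is not best and not interfere(l, best)]
    return schedule

print([(l[0], sorted(l[1]), l[2]) for l in cgms(links)])
# greedy picks weight 1.8 + 1.4 = 3.2, versus the exhaustive optimum of 3.7
```

On this toy instance the greedy pass returns total weight 3.2 versus the exhaustive optimum of 3.7, which is well within the $1/\delta_{\max}$-type guarantee discussed in Section 5.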

In the algorithm, we first define $h = \min\{\varpi_{\max}, nl_{\max}\}$, where $nl_{\max}$ is the maximum number of energy links that can be activated simultaneously ($nl_{\max} = \max_{I_s\in LS}|I_s|$), and $\varpi_{\max}$ is the maximum degree of the interference graph $IG$ (refer to the previous section). Moreover, a MD $i$ that needs to offload its task to WDs maintains a WD set $W'_i(t)$ and chooses WDs in $W'_i(t)$ to construct its energy link.

The algorithm iterates for $h$ rounds. In each round, a MD $i$ whose task offloading request has not yet been responded to will choose the energy link $l$ with the highest weight ($b_l(t)$) among the remaining available energy links, and broadcast a task offloading request to all the WDs in $R_l(t)$. The task offloading request message contains the general task information as well as the amount of workload to be offloaded to each of the WDs in $R_l(t)$. When a WD receives one or more task offloading requests, it will choose the one with the highest weight and send a task offloading accept signal to that MD. If a MD receives the task offloading accept signal from all the WDs in $R_l(t)$, it implies that energy link $l$ has the highest weight among all the energy links that interfere with it. The MD will then acknowledge all the WDs in $R_l(t)$ that its task will be offloaded to them.


Upon receiving the task offloading confirmation signal from MD $i$, WD $j$ is ready to serve MD $i$. Hence it will inform the other MDs connected to it with an occupation message. If MD $i$ receives an occupation message from WD $j$, it will delete $j$ from $W'_i(t)$.

Algorithm 3. The Distributed Greedy Maximal Scheduling Algorithm

Data: $M_j$, $W_i$, $\{A_i(t) \mid i \in M\}$, $TW \leftarrow \emptyset$
Result: $\{\bar{s}_i(t), s_{ij}(t)\}$, $\{x_{ij}(t)\}$
Initialization: $W'_i(t) \leftarrow W_i \setminus \{j \mid q_j(t) < e_j^{\max}\}$;
for $sub_i = 1$ to $h$ do
  foreach MD $i \in \{i_1 \mid i_1 \in M, \bar{s}_{i_1}(t) = 0, r_{i_1}(t) > 0\}$ do
    Sort the WDs in $W'_i(t)$ in descending order of $(q_j(t)-\theta_j)\, z_{ij}(t)$, denote the new sequence as $T'_{i,1,|W'_i(t)|}$, and select the largest $n_i^* + 1$ WDs such that $r_i(t) \le \sum_{j\in T'_{i,1,n_i^*+1}} p_{t,o}/w_{ij}(t)$ and $\gamma\cdot(n_i^* + 1) \le 1$;
    Construct an energy link $l$ with $R_l(t) = T'_{i,1,n_i^*+1}(t)$, and compute its weight $b_l(t)$ according to (16);
    Send a task offloading request to the WDs in $R_l(t)$;
  end
  foreach WD $j \in W \setminus TW$ do
    $m_j \leftarrow \arg\max_{m_l\in M_j} b_l(t)$ and respond with a task accept signal to MD $m_j$;
  end
  foreach MD $i \in \{i_1 \mid i_1 \in M, \bar{s}_{i_1}(t) = 0, r_{i_1}(t) > 0\}$ do
    if MD $i$ receives task accept signals from all WDs in $R_l(t)$ then
      Acknowledge all WDs in $R_l(t)$ with a task offloading confirmation signal; $\bar{s}_i(t) \leftarrow 1$;
      foreach WD $j \in R_l(t)$ do
        $s_{ij}(t) \leftarrow 1$; $x_{ij}(t)$ is set according to Equation (15);
      end
    end
  end
  foreach WD $j \in W \setminus TW$ do
    if WD $j$ receives a task offloading confirmation signal then
      $TW \leftarrow TW \cup \{j\}$ and send an occupation message to $i \in M_j \setminus \{m_j\}$.
    end
  end
  foreach MD $i \in \{i_1 \mid i_1 \in M, \bar{s}_{i_1}(t) = 0, r_{i_1}(t) > 0\}$ do
    if MD $i$ receives an occupation message from WD $j$ then
      $W'_i(t) \leftarrow W'_i(t) \setminus \{j\}$
    end
  end
end
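The following self-contained sketch replays one round of the DGMS handshake at the message level (propose, accept, confirm, occupy). The proposal sets and weights are toy values; the weight computation of Equation (16), the iteration cap $h$, and the workload split are omitted.

```python
# One round of the DGMS handshake in Algorithm 3, reduced to its message
# pattern: pending MDs propose their current best energy link, each free WD
# accepts the highest-weight proposal it hears, and a link is confirmed only
# if every WD in it accepted. Proposals and weights are toy values.

def dgms_round(proposals, free_wds):
    """proposals: {md: (set_of_wds, weight)} for MDs still unscheduled."""
    # each free WD picks the highest-weight proposal that involves it
    accepts = {}
    for wd in free_wds:
        offers = [(w, md) for md, (wds, w) in proposals.items() if wd in wds]
        if offers:
            accepts[wd] = max(offers)[1]
    confirmed = {md: wds for md, (wds, _) in proposals.items()
                 if wds <= free_wds and all(accepts.get(wd) == md for wd in wds)}
    occupied = set().union(*confirmed.values()) if confirmed else set()
    return confirmed, free_wds - occupied

proposals = {"MD1": ({"AP1", "AP2"}, 1.8),
             "MD2": ({"AP3"}, 1.0),
             "MD3": ({"AP3", "AP4"}, 1.4)}
confirmed, still_free = dgms_round(proposals, {"AP1", "AP2", "AP3", "AP4"})
print(confirmed, still_free)   # MD1 and MD3 win; MD2 retries next round
```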

Notice that, by removing $j$ from $W'_i(t)$, a set of energy links interfering with the confirmed energy link is also excluded from future use. Hence, through the four-way handshake, only the energy links whose weights are larger than those of their remaining interfering energy links will be selected. Hence we can argue that in each iteration the system utility of the DGMS-based scheduling scheme is no less than that of the CGMS-based scheme.

Moreover, since the maximum number of simultaneously activated energy links is $nl_{\max}$, it is expected that after $nl_{\max}$ rounds no more energy links can be activated. This is because at least one energy link will be accepted in each iteration if such an energy link exists. Likewise, if an energy link is going to be admitted by the mobile edge cloud, in the worst case it will be admitted after all its interfering energy links have been tried (and the corresponding task offloading requests are not accepted). This takes at most $\varpi_{\max}$ rounds. As a consequence, the maximum number of iterations is confined to $h$.

In addition, a distributed scheduling algorithm requires that all MDs report their true energy link weights to the WDs. If a selfish MD, which wants to offload its own task with higher priority, reports an energy link weight higher than the true value, it will deprive other MDs of computing resources, resulting in a decreased overall system utility.

This issue, however, can be easily addressed by creating an agent for each MD in one of its connected WDs, or by introducing brokers as middleware to coordinate task offloading between MDs and WDs. The agents or brokers will then maintain the accurate weight information of the energy links.

5 SYSTEM PERFORMANCE ANALYSIS

5.1 Implementation Complexity Analysis

According to (12), when $q_j(t) \ge \theta_j$, $h_j(t) = 0$. Hence

$$q_j(t) \le \theta_j + h_j^{\max} = e_j^{\max} + f(v_0) + h_j^{\max} = \zeta_j(v_0). \tag{19}$$

Here $\zeta_j(v_0)$ is the battery size of WD $j$. Notice that the higher the $v_0$, the larger the battery size, and the higher the energy drift at WD $j$.

For the CGMS-based scheduling scheme, a centralized controller needs to collect the energy queue information of all the WDs as well as the task offloading requests from the MDs. After selecting the set of energy links to be scheduled, the controller passes the schedule to all the MDs and WDs, and the MDs then offload their tasks to the WDs. The complexity of the algorithm is $O(nl_{\max})$ (recall that $nl_{\max} = \max_{I_s\in LS}|I_s|$).

As to the DGMS-based scheduling scheme, in the first part of the control sub-frame, each WD passes its energy queue length information to its associated MDs. The algorithm then runs $h$ iterations. In each iteration, 4 control sub-slots are required to finish the 4-way handshake for task offloading confirmation. In total, $1 + 4h$ control sub-slots are needed, and the complexity is $O(h)$. To be more specific, for a network with $|N| = 20$ and a control sub-slot of 50 $\mu$s, since $h \le |N|$, the number of control sub-slots is upper bounded by $1 + 4|N|$. If the frame length is 50 ms, the control overhead is upper bounded by 8.01 percent. When the network size increases, we can further partition the network into several non-overlapping regions and perform DGMS for each region separately. How to partition the network into multiple regions is out of the scope of this work and will be discussed in our future work.
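A quick back-of-the-envelope check of this overhead bound is given below; the 50 $\mu$s sub-slot duration is the assumption used here, and the rough 8 percent figure it yields is close to the 8.01 percent quoted above.

```python
# Control overhead of the DGMS handshake under the stated bound h <= |N|,
# assuming a 50-microsecond control sub-slot and a 50 ms frame.
N = 20
sub_slot = 50e-6          # s
frame = 50e-3             # s
sub_slots = 1 + 4 * N     # one broadcast slot plus 4 slots per iteration
overhead = sub_slots * sub_slot / frame
print(sub_slots, f"{overhead:.1%}")   # 81 sub-slots, roughly 8% of a frame
```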

5.2 Utility Analysis

We first define the following terms:

(1) $U^{OPT}$: the optimal system utility of the multi-user multi-task offloading problem;


(2) $U_{v_0}$: the optimal system utility of the Lyapunov drift approach with exhaustive search (the schedule is derived from the optimal value of (18)); and

(3) $U^{CGMS}_{v_0}$/$U^{DGMS}_{v_0}$: the system utility of the CGMS/DGMS based scheduling scheme.

The performance gaps between the optimal multi-user multi-task offloading scheduling algorithm and the CGMS or DGMS based scheduling schemes are revealed in the following theorem.

Theorem 2. For a given $v_0$, $U^{CGMS}_{v_0} \ge \dfrac{U^{OPT}}{\delta_{\max}} - \dfrac{d}{\delta_{\max}\, v_0}$ and $U^{DGMS}_{v_0} \ge \dfrac{U^{OPT}}{\delta_{\max}} - \dfrac{d}{\delta_{\max}\, v_0}$, where $\delta_{\max} = \max_{\{l\in EL,\, I_s\in LS\}} |I_s \cap N_l|$.

Proof. Please refer to Appendix B, available in the online supplemental material.

The battery size is $O(v_0)$ and the utility loss term of both the CGMS and DGMS based schemes is $O(1/v_0)$. This implies that a lower energy storage trades for a lower achievable utility, and vice versa. Thus, we can balance the system implementation complexity and the system performance by choosing an appropriate control parameter $v_0$.
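The trade-off can be illustrated numerically. In the sketch below, the worst-case utility loss term $d/(\delta_{\max} v_0)$ shrinks as $1/v_0$ while the required battery of Equation (19) grows linearly in $v_0$; every number used is hypothetical, including the assumed affine form of $f(v_0)$.

```python
# Battery-size / utility-loss trade-off implied by Theorem 2 and Eq. (19),
# evaluated for hypothetical constants.
d, delta_max = 2.0, 3            # drift constant and max conflict degree (assumed)
e_max, h_max = 0.5, 0.9          # per-frame energy bounds in joules (assumed)
for v0 in (1, 10, 100):
    utility_loss = d / (delta_max * v0)
    battery = e_max + 0.05 * v0 + h_max      # e_max + f(v0) + h_max, with
    print(v0, round(utility_loss, 4), battery)  # f(v0) ~ 0.05*v0 assumed
```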

Theorem 2 also implies that, compared with the CGMS-based scheduling scheme, the DGMS-based scheduling scheme does not degrade the system performance. However, by allowing WDs to schedule tasks from MDs locally (a centralized scheduler is no longer needed), several rounds of information exchange are required to update the resource usage status among MDs and WDs. Hence, the DGMS-based scheduling scheme uses more control overhead to trade for lower implementation complexity. Moreover, as has been discussed in the previous subsection, the scheduling overhead of the DGMS-based scheme is still relatively low. This implies that the proposed DGMS-based scheduling scheme provides an efficient alternative when a centralized controller is not available.

6 SIMULATIONS

6.1 Simulation Settings

We evaluate the system performance of the proposed CGMS and DGMS based multi-user multi-task offloading schemes in this section. We randomly generate four topologies: (5 MDs, 10 WDs), (10 MDs, 20 WDs), (15 MDs, 30 WDs), and (20 MDs, 40 WDs). The tested topologies are shown in Fig. 2.

The transmission range between a MD and a WD is 30 meters, and a (MD, WD) pair can communicate as long as they are within each other's transmission range. Once the topology has been drawn, the transmission rate of each link is determined as $c_{ij} = B\log_2\big(1 + p_i^t L_{i,j}/n_0\big)$. Here $B$, the bandwidth of the network, is set to 20 MHz, and the noise power $n_0$ is $-90$ dBm. $L_{i,j} = dis_{i,j}^{-\varphi}$ is the path loss between MD $i$ and WD $j$ (with distance equal to $dis_{i,j}$ and $\varphi$ set to 3.5).

The energy arrival rate, the task arrival rate, and the CPU processing frequency are randomly generated with means equal to 0.9 W, 3 tasks/s, and 2.2 GHz respectively. We then randomly generate 4 tuples to represent 4 different task types (referred to as $A_1, A_2, A_3, A_4$ in Table 2) to be used in the simulation. In each frame, a MD first determines whether it has a task to be offloaded or not based on its task arrival rate. If a task arrives, it randomly selects a task type and sends a task offloading request to the mobile edge cloud. The task statistics are in accordance with [11]. Since most of the energy is consumed for computing the task rather than transmitting it, the revenue of a task is set to $u_i(t) = \log\big(1 + r_i(t)\rho_i(t)\big)/c_0$, with $c_0 = 19.1$ (so that the utility of $A_1$ is normalized to 1). The system parameters are listed in Table 2.
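The sketch below evaluates the rate and revenue formulas as reconstructed above. The natural-log base, the KB-to-bit conversion, and the division by $c_0$ are assumptions made here, chosen because they reproduce the task utilities listed in Table 2.

```python
# Checking the reconstructed simulation-setting formulas against Table 2;
# log base, unit conversions, and normalization are assumptions.
import math

def link_rate(dist_m, p_tx=0.1, B=20e6, noise_dbm=-90, phi=3.5):
    """c_ij = B * log2(1 + p_tx * d^-phi / n0), with n0 converted from dBm."""
    n0 = 10 ** (noise_dbm / 10) / 1000          # watts
    return B * math.log2(1 + p_tx * dist_m ** (-phi) / n0)

def task_utility(r_kb, rho, c0=19.1):
    """u_i = ln(1 + r_i * rho_i) / c0, with r_i in bits."""
    return math.log(1 + r_kb * 1024 * 8 * rho) / c0

print(f"{link_rate(30):.2e} bit/s")
for name, r_kb, rho in [("A1", 29.5, 750.0), ("A2", 27.6, 734.5),
                        ("A3", 32.8, 418.7), ("A4", 29.7, 861.6)]:
    print(name, round(task_utility(r_kb, rho), 2))   # ~1.0, 0.99, 0.97, 1.0
```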

6.2 Simulation Results

In this sub-section, we first study the convergence speed of the proposed CGMS and DGMS based scheduling schemes. We then evaluate the system performance with respect to different $v_0$, energy arrival rates, task arrival rates, and CPU processing frequencies. To be more specific, as it is not convenient to evaluate the impact of the average task arrival rate on the overall system performance by simply changing the task arrival rate of one user (recall that we have multiple mobile devices in the system), we introduce the task arrival scaling parameter $\alpha_A$. In detail, $\alpha_A$ is a scalar used to scale the task arrival rates of all the mobile devices. Similarly, $\alpha_E$ and $\alpha_F$ are introduced as the scaling factors for the average energy arrival rates and the CPU processing frequencies of all the wireless devices.

Moreover, besides the proposed CGMS and DGMS based multi-user multi-task offloading algorithms, we also study the performance of the optimal scheduling scheme and a Random Scheduling (RS) scheme. The optimal scheduling scheme is realized by exhaustively searching all possible scheduling policies and choosing the best one in every frame.

Fig. 2. The topologies of the four tested networks.

TABLE 2
System Parameters

Transmitting/receiving power ($p_j^t$/$p_j^r$): 0.1 W / 0.07 W
Noise power ($n_0$): $-90$ dBm
Path loss exponent ($\varphi$): 3.5
Frame duration: 50 ms
Control sub-slot duration: 50 $\mu$s
$\gamma$: 0.1
$A_1 = \{r_1, \beta_1, \rho_1, u_1\}$: $\{29.5\ \text{KB}, 0.50, 750.0, 1.0\}$
$A_2 = \{r_2, \beta_2, \rho_2, u_2\}$: $\{27.6\ \text{KB}, 0.63, 734.5, 0.99\}$
$A_3 = \{r_3, \beta_3, \rho_3, u_3\}$: $\{32.8\ \text{KB}, 0.69, 418.7, 0.97\}$
$A_4 = \{r_4, \beta_4, \rho_4, u_4\}$: $\{29.7\ \text{KB}, 0.27, 861.6, 1.0\}$


However, since it is too time consuming to derive the optimal scheduling scheme for the (10 MDs, 20 WDs), (15 MDs, 30 WDs), and (20 MDs, 40 WDs) topologies, we only evaluate the performance of the optimal scheduling scheme for the (5 MDs, 10 WDs) topology. In the RS scheme, a MD broadcasts a task offloading request to all the WDs associated with it. A WD randomly selects one task offloading request and responds to it. If a MD receives enough responses from its associated WDs, it issues a task offloading confirmation signal and the corresponding WDs serve the MD in the task offloading sub-frame. Since the procedure is similar to that of the DGMS-based scheme, we omit the details of the RS scheme.

6.2.1 Convergence Speed of the Proposed Greedy Maximal Scheduling Based Algorithms

Fig. 3 shows the convergence time of the bottleneck energy queue for the four tested topologies under the proposed DGMS-based scheme. The bottleneck energy queue refers to the one with the highest average energy queue length. In the four tested topologies, the energy arrival scaling parameter $\alpha_E$ is set to 1 (every WD has enough energy for task offloading) and the task arrival scaling parameter $\alpha_A$ is set to 10 (every MD always needs to offload its task) so as to maintain the energy queue length variations at a high level. Moreover, the battery sizes are set to 9.11 J, 12.61 J, 12.31 J, and 14.61 J for the four networks respectively so that the maximum system utility can be achieved. From Fig. 3, we can observe that the four tested topologies converge within 15 s. Notice that the convergence can be further accelerated if we reduce $v_0$, i.e., the battery size of each WD.

6.2.2 Battery Size versus the Overall System Utility

The impact of the battery storage on the overall system performance is revealed in Fig. 4. The maximum required battery size is defined as $z^{\max}(v_0) = \max_{j \in \mathcal{W}} z_j(v_0)$. The four curves in Fig. 4a represent the performance of four different scheduling schemes. Note that the red curve with cross marks (denoted as "Exhaustive") shows the performance of the optimal scheduling scheme. In the figure, the task arrival scaling parameter is set to a large value (i.e., $a_A = 10$), the energy arrival scaling parameter $a_E$ is 1, and the CPU frequency scaling parameter $a_F$ is 1.

From Fig. 4, we can observe that the larger the battery size, the higher the overall system utility. When $z^{\max}$ = 9.1 J, 12.6 J, 12.3 J, and 14.6 J, the overall system utility converges in the four tested topologies, respectively. Furthermore, as the convergence performance of the CGMS-based scheduling scheme is almost the same as that of the DGMS-based scheduling scheme, we omit the convergence performance of the CGMS-based scheduling scheme here for brevity.

6.2.3 Energy Arrival Rate versus the Overall System Utility

In Fig. 5, the task arrival scaling parameter $a_A$ is set to 10 and the CPU frequency scaling parameter $a_F$ is 1. Moreover, $z^{\max}$ is set to 9.1 J, 12.61 J, 12.31 J, and 14.61 J for the four tested topologies, respectively. As shown in Fig. 5, the higher the energy arrival rate (higher $a_E$), the higher the computing ability of each WD, and hence the higher the overall system utility. However, when the energy arrival rate reaches a certain point, the overall system utility converges.

Fig. 3. Convergence time of the 4 tested topologies.

Fig. 4. Battery size versus the overall system utility of the: (a) (5 MDs, 10 WDs) network; (b) (10 MDs, 20 WDs) network; (c) (15 MDs, 30 WDs) network; (d) (20 MDs, 40 WDs) network.

Fig. 5. Energy Arrival Scaling Parameter versus the overall system utility of the: (a) (5 MDs, 10 WDs) network; (b) (10 MDs, 20 WDs) network; (c) (15 MDs, 30 WDs) network; (d) (20 MDs, 40 WDs) network.



This is because, initially, as the energy arrival rate increases, the probability that WDs will be used for task offloading (the probability that the energy queue length $q_j(t)$ exceeds its offloading threshold) increases, and hence the overall system utility increases. When the energy arrival scaling parameter exceeds a certain value, all the wireless devices can be expected to participate in task computation offloading as often as possible, since the energy supply is then more than sufficient for all the wireless devices.

We also observe that the proposed CGMS- and DGMS-based scheduling schemes achieve almost the same performance. Moreover, in the low-energy regime (very small $a_E$), the performance differences among the four schemes are not obvious. The reason is that with a low energy supply, the number of devices that can be used for task offloading is limited, and in this case the four schemes achieve similar performance. As the energy supply increases, the performance gaps between the optimal scheme and the CGMS/DGMS-based scheduling schemes, as well as the RS scheme, grow larger. This implies that the optimal scheduling scheme balances the energy usage and task offloading in the most effective way. When the energy supply is high enough, the performance gaps between the optimal scheme and the CGMS/DGMS-based scheduling schemes shrink. However, the performance gap between the RS scheme and the other schemes remains significant. In conclusion, the optimal scheme shows at most 20.7 percent higher overall system utility than the CGMS/DGMS-based scheduling schemes. As to the RS scheme, in the four tested topologies, its overall system utility is at most 28.2, 28.1, 19.4, and 31.9 percent lower than that of the proposed CGMS/DGMS-based scheduling schemes.
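For clarity, the quoted percentages are read here as relative gaps with respect to the scheme named after "than"; a trivial helper under that assumption (names are ours) is:

```python
def relative_gap_percent(u_scheme: float, u_baseline: float) -> float:
    """Percentage by which `u_scheme` exceeds `u_baseline`,
    i.e., 100 * (U_scheme - U_baseline) / U_baseline."""
    return 100.0 * (u_scheme - u_baseline) / u_baseline

# Example: an optimal-vs-CGMS/DGMS gap of 20.7 percent corresponds to
# relative_gap_percent(1.207, 1.0) ~= 20.7.
```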

6.2.4 Task Arrival Rate versus the Overall System Utility

Fig. 6 shows the impact of the task arrival rate on the overall system utility. Here the energy arrival scaling parameter $a_E$ and the CPU frequency scaling parameter $a_F$ are set to 1, and $v_0$ is set to 9.1 J, 12.6 J, 12.3 J, and 14.6 J for the four tested topologies, respectively. From Fig. 6a, it is obvious that the higher the task arrival rate, the more tasks are served, and the higher the overall system utility. In terms of the performance gaps among different schemes, we observe trends similar to those explained in the previous subsection. Furthermore, the optimal scheme achieves at most 9.4 percent higher overall system utility than the proposed CGMS scheme in the (5 MDs, 10 WDs) network. Likewise, the proposed DGMS scheme achieves 31.6, 27.4, 18.8, and 31.3 percent higher overall system utility than the RS scheme.

6.2.5 CPU Frequency versus the Overall System Utility

Fig. 7 shows the impact of different CPU processing frequencies on the overall system utility. Here the energy arrival scaling parameter $a_E$ and the task arrival scaling parameter $a_A$ are set to 1 and 10, respectively, and $v_0$ is set to 9.1 J, 12.6 J, 12.3 J, and 14.6 J for the four tested topologies, respectively. From Fig. 7a, we can see that the optimal scheme achieves at most 9.9 percent higher overall system utility than the proposed CGMS/DGMS-based scheduling schemes in the (5 MDs, 10 WDs) network. Likewise, the proposed CGMS/DGMS-based scheduling schemes achieve 24.1, 27.3, 16.4, and 32.8 percent higher overall system utility than the RS scheme.

7 CONCLUSIONS

In this work, we discuss multi-user multi-task offloading scheduling schemes in a renewable-energy-powered mobile edge cloud system. As the mobile edge cloud is composed of a set of WDs with relatively low processing ability (not as high as that of a server), the scheduling scheme needs to map the workload from a MD to multiple WDs. Moreover, the scheduling scheme also needs to handle the uncertain energy supply at each WD properly so as to make the best of the mobile edge cloud system. In detail, the scheduling algorithms determine the energy harvesting strategy, the set of offloading requests to be admitted, and the subset of WDs to compute the workload of each admitted offloading request so as to maximize the overall system utility.

Fig. 6. Task Arrival Scaling Parameter versus the overall system utility of the: (a) (5 MDs, 10 WDs) network; (b) (10 MDs, 20 WDs) network; (c) (15 MDs, 30 WDs) network; (d) (20 MDs, 40 WDs) network.

Fig. 7. CPU Frequency Scaling Parameter versus the overall system utility of the: (a) (5 MDs, 10 WDs) network; (b) (10 MDs, 20 WDs) network; (c) (15 MDs, 30 WDs) network; (d) (20 MDs, 40 WDs) network.



To exploit the computing capacity of the green mobile edge cloud thoroughly, the scheduling algorithms match the offloading energy consumption at the mobile edge cloud to its harvestable energy. Since the scheduling problem is NP-hard, centralized and distributed greedy maximal scheduling algorithms are proposed. As shown in Section 5, the proposed DGMS and CGMS algorithms achieve at least $\frac{1}{d_{\max}}$ of the overall system utility of the optimal scheduling scheme. Moreover, from the simulations, we observe that compared with a random scheduling scheme, the proposed DGMS and CGMS algorithms provide 16.4 to 32.8 percent higher overall system utility.

ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61702175.

REFERENCES

[1] P. Mach and Z. Becvar, "Mobile edge computing: A survey on architecture and computation offloading," IEEE Commun. Surveys Tuts., vol. 19, no. 3, pp. 1628–1656, Jul.-Sep. 2017.
[2] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "Mobile edge computing: Survey and research outlook," arXiv:1701.01090, 2017.
[3] M. Chen, Y. Hao, Y. Li, and C. F. Lai, "On the computation offloading at ad hoc cloudlet: Architecture and service modes," IEEE Commun. Mag., vol. 53, no. 6, pp. 18–24, Jun. 2015.
[4] X. Lin, Y. Wang, Q. Xie, and M. Pedram, "Task scheduling with dynamic voltage and frequency scaling for energy minimization in the mobile cloud computing environment," IEEE Trans. Serv. Comput., vol. 8, no. 2, pp. 175–186, Mar./Apr. 2015.
[5] S. E. Mahmoodi, R. N. Uma, and K. P. Subbalakshmi, "Optimal joint scheduling and cloud offloading for mobile applications," IEEE Trans. Cloud Comput., Apr. 2016.
[6] W. Chen, C. T. Lea, and K. Li, "Dynamic resource allocation in ad-hoc mobile cloud computing," in Proc. IEEE Wireless Commun. Netw. Conf., 2017, pp. 1–6.
[7] S. Zhang, J. Wu, and S. Lu, "Distributed workload dissemination for makespan minimization in disruption tolerant networks," IEEE Trans. Mobile Comput., vol. 15, no. 7, pp. 1661–1673, Jul. 2016.
[8] C. You, K. Huang, and H. Chae, "Energy efficient mobile cloud computing powered by wireless energy transfer," IEEE J. Select. Areas Commun., vol. 34, no. 5, pp. 1757–1771, May 2016.
[9] Y. Mao, J. Zhang, and K. B. Letaief, "Dynamic computation offloading for mobile-edge computing with energy harvesting devices," IEEE J. Select. Areas Commun., vol. 34, no. 12, pp. 3590–3605, Dec. 2016.
[10] T. Q. Dinh, J. Tang, Q. D. La, and T. Q. S. Quek, "Adaptive computation scaling and task offloading in mobile edge computing," in Proc. IEEE Wireless Commun. Netw. Conf., 2017, pp. 1–6.
[11] S. Guo, B. Xiao, Y. Yang, and Y. Yang, "Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing," in Proc. IEEE Int. Conf. Comput. Commun., 2016, pp. 1–9.
[12] L. Pu, X. Chen, J. Xu, and X. Fu, "D2D fogging: An energy-efficient and incentive-aware task offloading framework via network-assisted D2D collaboration," IEEE J. Select. Areas Commun., vol. 34, no. 12, pp. 3887–3901, Dec. 2016.
[13] X. Chen, L. Pu, L. Gao, W. Wu, and D. Wu, "Exploiting massive D2D collaboration for energy-efficient mobile edge computing," IEEE Wireless Commun., vol. 24, no. 4, pp. 64–71, Aug. 2017.
[14] L. Yang, J. Cao, H. Cheng, and Y. Ji, "Multi-user computation partitioning for latency sensitive mobile cloud applications," IEEE Trans. Comput., vol. 64, no. 8, pp. 2253–2266, Aug. 2015.
[15] M. Jia, W. Liang, Z. Xu, and M. Huang, "Cloudlet load balancing in wireless metropolitan area networks," in Proc. IEEE Int. Conf. Comput. Commun., 2016, pp. 1–9.
[16] X. Chen, L. Jiao, W. Li, and X. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795–2808, Oct. 2016.
[17] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, "Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems," IEEE Trans. Wireless Commun., vol. 16, no. 9, pp. 5994–6009, Sep. 2017.
[18] J. Song, Y. Cui, M. Li, J. Qiu, and R. Buyya, "Energy-traffic tradeoff cooperative offloading for mobile cloud computing," in Proc. IEEE 22nd Int. Symp. Quality Serv., 2014, pp. 284–289.
[19] J. Xu and S. Ren, "Online learning for offloading and autoscaling in renewable-powered mobile edge computing," in Proc. IEEE Global Commun. Conf., 2016, pp. 1–6.
[20] S. Jošilo and G. Dán, "A game theoretic analysis of selfish mobile computation offloading," in Proc. IEEE Conf. Comput. Commun., 2017, pp. 1–9.
[21] Y. Li and W. Wang, "Can mobile cloudlets support mobile applications?" in Proc. IEEE Conf. Comput. Commun., 2014, pp. 1060–1068.
[22] J. Chase and D. Niyato, "Joint optimization of resource provisioning in cloud computing," IEEE Trans. Serv. Comput., vol. 10, no. 3, pp. 396–409, May/Jun. 2017.
[23] T. X. Tran and D. Pompili, "Joint task offloading and resource allocation for multi-server mobile-edge computing networks," arXiv:1705.00704, 2017.
[24] F. Wang, J. Xu, X. Wang, and S. Cui, "Joint offloading and computing optimization in wireless powered mobile-edge computing systems," IEEE Trans. Wireless Commun., vol. 17, no. 3, Mar. 2018.
[25] K. Habak, M. Ammar, K. A. Harras, and E. Zegura, "Femto clouds: Leveraging mobile devices to provide cloud service at the edge," in Proc. IEEE 8th Int. Conf. Cloud Comput., 2015, pp. 9–16.
[26] D.-J. Deng, K.-C. Chen, and R.-S. Cheng, "IEEE 802.11ax: Next generation wireless local area networks," in Proc. 10th Int. Conf. Heterogeneous Netw. Quality Rel. Secur. Robustness, 2014, pp. 77–82.
[27] M. Neely, "Stochastic network optimization with application to communication and queueing systems," Synthesis Lectures Commun. Netw., vol. 3, no. 1, pp. 211, 2010.
[28] S. Sakai, M. Togasaki, and K. Yamazaki, "A note on greedy algorithms for the maximum weighted independent set problem," Discrete Appl. Math., vol. 126, no. 2, pp. 313–322, 2003.
[29] C. Joo, X. Lin, J. Ryu, and N. B. Shroff, "Distributed greedy approximation to maximum weighted independent set for scheduling with fading channels," IEEE/ACM Trans. Netw., vol. 24, no. 3, pp. 1476–1488, Jun. 2016.

Weiwei Chen received the BS degree in electronic engineering from the Beijing University of Posts and Telecommunications, Beijing, China, in 2007, and the PhD degree from the Hong Kong University of Science and Technology, Hong Kong, in 2013. She is now with the College of Computer Science and Electronic Engineering, Hunan University, China. Her research interests include resource allocation in mobile edge computing, wireless networking for IoT systems, and future Internet architecture.



Dong Wang received the BS and PhD degrees in computer science from Hunan University, in 1986 and 2006, respectively. Since 1986, he has been working with Hunan University, China. Currently, he is a professor with Hunan University. His main research interests include network test and performance evaluation, wireless communications, and mobile computing.

Keqin Li is a SUNY distinguished professor of computer science with the State University of New York. He is also a distinguished professor of the Chinese National Recruitment Program of Global Experts (1000 Plan), Hunan University, China. He was an Intellectual Ventures endowed visiting chair professor with the National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China, during 2011-2014. His current research interests include parallel computing and high performance computing, distributed computing, energy efficient computing and communication, heterogeneous computing systems, cloud computing, big data computing, CPU-GPU hybrid and cooperative computing, multicore computing, storage and file systems, wireless communication networks, sensor networks, peer-to-peer file sharing systems, mobile computing, service computing, Internet of things, and cyber-physical systems. He has published more than 550 journal articles, book chapters, and refereed conference papers, and has received several best paper awards. He is currently serving or has served on the editorial boards of the IEEE Transactions on Parallel and Distributed Systems, the IEEE Transactions on Computers, the IEEE Transactions on Cloud Computing, the IEEE Transactions on Services Computing, and the IEEE Transactions on Sustainable Computing. He is a fellow of the IEEE.
