DESCRIPTION
Typical Soft Technologies is a leading software company in Chennai that offers quality projects and training to its customers. We deliver projects to students and companies, and our computer courses help prepare students for their future. To enquire about any of our projects or courses, contact us: Admin@typical.in, 044-43555140, 093443 99926.
TYPICAL SOFT TECHNOLOGIES
Contact : 044-43555140, 9344399918/26
MOBILE COMPUTING
1. ALERT: An Anonymous Location-Based Efficient Routing Protocol in
MANETs
Abstract :
Mobile Ad Hoc Networks (MANETs) use anonymous routing protocols that hide node
identities and/or routes from outside observers in order to provide anonymity protection.
However, existing anonymous routing protocols relying on either hop-by-hop encryption or
redundant traffic, either generate high cost or cannot provide full anonymity protection to data
sources, destinations, and routes. The high cost exacerbates the inherent resource constraint
problem in MANETs especially in multimedia wireless applications. To offer high anonymity
protection at a low cost, we propose an Anonymous Location-based Efficient Routing proTocol
(ALERT). ALERT dynamically partitions the network field into zones and randomly chooses
nodes in zones as intermediate relay nodes, which form a nontraceable anonymous route. In
addition, it hides the data initiator/receiver among many initiators/receivers to strengthen source
and destination anonymity protection. Thus, ALERT offers anonymity protection to sources,
destinations, and routes. It also has strategies to effectively counter intersection and timing
attacks. We theoretically analyze ALERT in terms of anonymity and efficiency. Experimental
results exhibit consistency with the theoretical analysis, and show that ALERT achieves better
route anonymity protection and lower cost compared to other anonymous routing protocols.
Also, ALERT achieves comparable routing efficiency to the GPSR geographical routing
protocol.
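The recursive zone bisection at the heart of ALERT can be sketched in a few lines. This is an illustrative toy, not the paper's exact algorithm: the split policy, the `min_side` stopping bound, and the node representation are all assumptions. Each step bisects the zone containing the destination and picks a random node inside it as the next relay, so the route differs on every run.

```python
import random

def split(zone):
    """Bisect a rectangular zone (x0, y0, x1, y1) along its longer side."""
    x0, y0, x1, y1 = zone
    if x1 - x0 >= y1 - y0:
        xm = (x0 + x1) / 2.0
        return (x0, y0, xm, y1), (xm, y0, x1, y1)
    ym = (y0 + y1) / 2.0
    return (x0, y0, x1, ym), (x0, ym, x1, y1)

def contains(zone, p):
    x0, y0, x1, y1 = zone
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

def random_relays(nodes, dst, zone=(0, 0, 100, 100), min_side=25, rng=random):
    """Shrink the zone toward dst, emitting one randomly chosen relay per
    bisection; the per-step randomness is what makes the route nontraceable."""
    relays = []
    while zone[2] - zone[0] > min_side or zone[3] - zone[1] > min_side:
        half_a, half_b = split(zone)
        zone = half_a if contains(half_a, dst) else half_b
        candidates = [n for n in nodes if contains(zone, n) and n != dst]
        if candidates:
            relays.append(rng.choice(candidates))
    return relays
```

Because the relay in each zone is drawn at random, repeated transmissions between the same pair of endpoints traverse different paths, which is the property the abstract calls a "nontraceable anonymous route".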
2. DSS: Distributed SINR-Based Scheduling Algorithm for Multihop
Wireless Networks
Abstract :
The problem of developing distributed scheduling algorithms for high throughput in
multihop wireless networks has been extensively studied in recent years. The design of a
distributed low-complexity scheduling algorithm becomes even more challenging when taking
into account a physical interference model, which requires the SINR at a receiver to be checked
when making scheduling decisions. To do so, we need to check whether a transmission failure is
caused by interference due to simultaneous transmissions from distant nodes. In this paper, we
propose a scheduling algorithm under a physical interference model, which is amenable to
distributed implementation with 802.11 CSMA technologies. The proposed scheduling algorithm
is shown to achieve throughput optimality. We present two variations of the algorithm to
enhance the delay performance and to reduce the control overhead, respectively, while retaining
throughput optimality.
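The SINR feasibility check that a physical interference model requires can be written down directly. A hedged sketch: the path-loss exponent, noise floor, and threshold `beta` below are illustrative values, not parameters from the paper.

```python
def sinr(receiver, tx, others, noise=1e-9, path_loss=lambda d: d ** -4):
    """SINR at `receiver` for transmitter `tx`, given the positions of the
    other simultaneously active transmitters."""
    def gain(a, b):
        d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return path_loss(max(d, 1.0))  # clamp to avoid near-field blow-up
    signal = gain(tx, receiver)
    interference = sum(gain(o, receiver) for o in others)
    return signal / (noise + interference)

def feasible(links, beta=2.0, noise=1e-9):
    """A schedule (a set of (tx, rx) links) is feasible under the physical
    model iff every link's SINR clears the threshold beta."""
    return all(
        sinr(rx, tx, [t for t, _ in links if t is not tx], noise) >= beta
        for tx, rx in links
    )
```

This also shows why the problem is hard to distribute: the feasibility of one link depends on the positions of all simultaneous transmitters, including distant ones.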
3. Toward Accurate Mobile Sensor Network Localization in Noisy
Environments
Abstract :
The node localization problem in mobile sensor networks has received significant attention.
Recently, particle filters adapted from robotics have produced good localization accuracies in
conventional settings. In spite of these successes, state-of-the-art solutions suffer significantly
when used in challenging indoor and mobile environments characterized by a high degree of
radio signal irregularity. New solutions are needed to address these challenges. We propose a
fuzzy logic-based approach for mobile node localization in challenging environments.
Localization is formulated as a fuzzy multilateration problem. For sparse networks with few
available anchors, we propose a fuzzy grid-prediction scheme. The fuzzy logic-based
localization scheme is implemented in a simulator and compared to state-of-the-art solutions.
Extensive simulation results demonstrate improvements in the localization accuracy from 20 to
40 percent when the radio irregularity is high. A hardware implementation running on Epic
motes and transported by iRobot mobile hosts confirms simulation results and extends them to
the real world.
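The fuzzy multilateration formulation can be illustrated with a minimal grid search. This sketch assumes triangular membership functions and a min-combination rule, which are common fuzzy-logic choices but not necessarily the paper's exact design:

```python
import math

def membership(measured, actual, width=5.0):
    """Triangular fuzzy membership: how well a noisy range `measured`
    agrees with the geometric distance `actual` (1 = perfect, 0 = off)."""
    return max(0.0, 1.0 - abs(measured - actual) / width)

def fuzzy_locate(anchors, grid, width=5.0):
    """Score each candidate position by the weakest (min) agreement over
    all anchor ranges, and return the best-supported position.
    `anchors` is a list of (x, y, measured_range) triples."""
    def score(p):
        return min(
            membership(rng, math.dist(p, (ax, ay)), width)
            for ax, ay, rng in anchors
        )
    return max(grid, key=score)
```

Because the fuzzy membership tolerates bounded ranging noise instead of trusting exact distances, positions remain well ranked even when individual measurements are irregular, which is the property the abstract targets.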
4. Adaptive Duty Cycle Control with Queue Management in Wireless Sensor
Networks
Abstract :
This paper proposes a control-based approach to the duty cycle adaptation for wireless sensor
networks. The proposed method controls the duty cycle through the queue management to
achieve high-performance under variable traffic rates. To have energy efficiency while
minimizing the delay, we design a feedback controller, which adapts the sleep time to the traffic
change dynamically by constraining the queue length at a predetermined value. In addition, we
propose an efficient synchronization scheme using an active pattern, which represents the active
time slot schedule for synchronization among sensor nodes, without affecting neighboring
schedules. Based on the control theory, we analyze the adaptation behavior of the proposed
controller and demonstrate system stability. The simulation results show that the proposed
method outperforms existing schemes by achieving more power savings while minimizing the
delay.
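The queue-based feedback idea can be sketched as a single proportional-control step. The gain `kp`, the setpoint, and the bounds below are illustrative assumptions; the paper designs the controller with a full stability analysis.

```python
def adapt_sleep(t_sleep, q_len, q_ref=10, kp=0.02, t_min=0.01, t_max=1.0):
    """One control step: shorten the sleep interval when the queue length
    exceeds the setpoint q_ref (the node must wake more often to drain the
    backlog), lengthen it when traffic is light, clamped to duty-cycle bounds."""
    t_next = t_sleep * (1.0 - kp * (q_len - q_ref))
    return min(t_max, max(t_min, t_next))
```

Constraining the queue length at a fixed setpoint is what couples the two goals in the abstract: a short queue bounds the delay, while the longest sleep interval consistent with that queue length maximizes energy savings.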
5. Cooperative Packet Delivery in Hybrid Wireless Mobile Networks: A
Coalitional Game Approach
Abstract :
We consider the problem of cooperative packet delivery to mobile nodes in a hybrid wireless
mobile network, where both infrastructure-based and infrastructure-less (i.e., ad hoc mode or
peer-to-peer mode) communications are used. We propose a solution based on a coalition
formation among mobile nodes to cooperatively deliver packets among these mobile nodes in the
same coalition. A coalitional game is developed to analyze the behavior of the rational mobile
nodes for cooperative packet delivery. A group of mobile nodes makes a decision to join or to
leave a coalition based on their individual payoffs. The individual payoff of each mobile node is
a function of the average delivery delay for packets transmitted to the mobile node from a base
station and the cost incurred by this mobile node for relaying packets to other mobile nodes. To
find the payoff of each mobile node, a Markov chain model is formulated and the expected cost
and packet delivery delay are obtained when the mobile node is in a coalition. Since both the
expected cost and packet delivery delay depend on the probability that each mobile node will
help other mobile nodes in the same coalition to forward packets to the destination mobile node
in the same coalition, a bargaining game is used to find the optimal helping probabilities. After
the payoff of each mobile node is obtained, we find the solutions of the coalitional game which
are the stable coalitions. A distributed algorithm is presented to obtain the stable coalitions and a
Markov-chain-based analysis is used to evaluate the stable coalitional structures obtained from
the distributed algorithm. Performance evaluation results show that when the stable coalitions are
formed, the mobile nodes achieve a nonzero payoff (i.e., utility is higher than the cost). With a
coalition formation, the mobile nodes achieve a higher payoff than when each mobile node
acts alone.
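The join-or-leave decision can be made concrete with a toy payoff function. Everything here (the linear form, the field names) is illustrative; in the paper the delays and costs are derived from a Markov chain model rather than taken as inputs.

```python
def payoff(delay_coalition, relay_cost, delay_alone, w=1.0):
    """Individual payoff: value of the delay reduction gained by
    cooperating, minus the cost of relaying packets for coalition partners."""
    return w * (delay_alone - delay_coalition) - relay_cost

def individually_stable(members):
    """A coalition is stable in this toy sense when no member would gain by
    leaving, i.e., every member's payoff is non-negative."""
    return all(
        payoff(m["d_coal"], m["cost"], m["d_alone"]) >= 0 for m in members
    )
```

This captures the abstract's stability criterion in miniature: a rational node stays only while cooperation beats acting alone after relaying costs are paid.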
6. VAPR: Void-Aware Pressure Routing for Underwater Sensor Networks
Abstract :
Underwater mobile sensor networks have recently been proposed as a way to explore and
observe the ocean, providing 4D (space and time) monitoring of underwater environments. We
consider a specialized geographic routing problem called pressure routing that directs a packet to
any sonobuoy on the surface based on depth information available from on-board pressure
gauges. The main challenge of pressure routing in sparse underwater networks has been the
efficient handling of 3D voids. In this respect, it was recently proven that the greedy stateless
perimeter routing method, very popular in 2D networks, cannot be extended to void recovery in
3D networks. Available heuristics for 3D void recovery require expensive flooding. In this paper,
we propose a Void-Aware Pressure Routing (VAPR) protocol that uses sequence number, hop
count and depth information embedded in periodic beacons to set up nexthop direction and to
build a directional trail to the closest sonobuoy. Using this trail, opportunistic directional
forwarding can be efficiently performed even in the presence of voids. The contribution of this
paper is twofold: 1) a robust soft-state routing protocol that supports opportunistic directional
forwarding; and 2) a new framework to attain loop freedom in static and mobile underwater
networks to guarantee packet delivery. Extensive simulation results show that VAPR
outperforms existing solutions.
7. DCIM: Distributed Cache Invalidation Method for Maintaining Cache
Consistency in Wireless Mobile Networks
Abstract :
This paper proposes distributed cache invalidation mechanism (DCIM), a client-based cache
consistency scheme that is implemented on top of a previously proposed architecture for caching
data items in mobile ad hoc networks (MANETs), namely COACS, where special nodes cache
the queries and the addresses of the nodes that store the responses to these queries. We have also
previously proposed a server-based consistency scheme named SSUM; in this paper, we
introduce DCIM, which is entirely client-based. DCIM is a pull-based algorithm that implements
adaptive time to live (TTL), piggybacking, and prefetching, and provides near strong consistency
capabilities. Cached data items are assigned adaptive TTL values that correspond to their update
rates at the data source, where items with expired TTL values are grouped in validation requests
to the data source to refresh them, whereas unexpired ones but with high request rates are
prefetched from the server. In this paper, DCIM is analyzed to assess the delay and bandwidth
gains (or costs) when compared to polling every time and push-based schemes. DCIM was also
implemented using ns2, and compared against client-based and server-based schemes to assess
its performance experimentally. The consistency ratio, delay, and overhead traffic are reported
versus several variables, and DCIM is shown to be superior to the other systems.
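The adaptive-TTL mechanism can be sketched as follows. The averaging rule, the `factor` safety margin, and the item fields are assumptions for illustration, not DCIM's exact formulas.

```python
def adaptive_ttl(inter_update_times, factor=0.9):
    """Derive a cached item's TTL from its observed update rate at the
    source: a mean of recent inter-update intervals, shrunk by a safety
    factor so the cache expires slightly before the next expected update."""
    if not inter_update_times:
        return 0.0
    return factor * sum(inter_update_times) / len(inter_update_times)

def classify(items, now):
    """Expired items are grouped into a batched validation request to the
    data source; unexpired but frequently requested items are prefetched."""
    validate, prefetch = [], []
    for it in items:
        if now - it["cached_at"] >= it["ttl"]:
            validate.append(it["id"])
        elif it["req_rate"] > it.get("hot_threshold", 1.0):
            prefetch.append(it["id"])
    return validate, prefetch
```

Batching the expired items into one validation request is what gives the bandwidth gain over polling the server for every access.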
8. Cross-Layer Minimum-Delay Scheduling and Maximum-Throughput
Resource Allocation for Multiuser Cognitive Networks
Abstract :
A cognitive network is considered that consists of a base station (BS) communicating with
multiple primary and secondary users. Each secondary user can access only one of the
orthogonal primary channels. A model is considered in which the primary users can tolerate a
certain average delay. A special case is also considered in which the primary users do not suffer
from any delay. A novel cross-layer scheme is proposed in which the BS performs successive
interference cancellation and thus a secondary user can coexist with an active primary user
without adversely affecting its transmission. A scheduling algorithm is proposed that minimizes
the average packet delay of the secondary user under constraints on the average power
transmitted by the secondary user and the average packet delay of the primary user. A resource
allocation algorithm is also proposed to assign the secondary users’ channels such that the total
throughput of the network is maximized. Our results indicate that the network throughput
increases significantly by increasing the number of transmitted packets of the secondary users
and/or by allowing a small delay for the primary user packets.
9. Scheduling Partition for Order Optimal Capacity in Large-Scale Wireless
Networks
Abstract :
The capacity scaling property specifies the change of network throughput when network size
increases. It serves as an essential performance metric in large-scale wireless networks. Existing
results have been obtained based on the assumption of using a globally planned link transmission
schedule in the network, which is however not feasible in large wireless networks due to the
scheduling complexity. The gap between the well-known capacity results and the infeasible
assumption on link scheduling potentially undermines our understanding of the achievable
network capacity. In this paper, we propose the scheduling partition methodology that
decomposes a large network into small autonomous scheduling zones and implements a localized
scheduling algorithm independently in each partition. We prove the sufficient and the necessary
conditions for the scheduling partition approach to achieve the same order of capacity as the
widely assumed global scheduling strategy. In comparison to the network dimension √n, a
scheduling partition size of Θ(r(n)) is sufficient to obtain the optimal capacity scaling, where r(n)
is the node transmission radius, which is much smaller than √n. We finally propose a
distributed partition protocol and a localized scheduling algorithm as our scheduling solution for
maximum capacity in large wireless networks.
10. Video On-Demand Streaming in Cognitive Wireless Mesh Networks
Abstract :
Cognitive radio (CR), which enables dynamic access of underutilized licensed spectrums, is a
promising technology for more efficient spectrum utilization. Since cognitive radio enables the
access of larger amount of spectrum, it can be used to build wireless mesh networks with higher
network capacity, and thus provide better quality of services for high bit-rate applications. In this
paper, we study the multisource video on-demand application in multi-interface cognitive
wireless mesh networks. Given a video request, we find a joint multipath routing and spectrum
allocation for the session to minimize its total bandwidth cost in the network, and therefore
maximize the number of sessions the network can support. We propose both distributed and
centralized routing and channel allocation algorithms to solve the problem. Simulation results
show that our algorithms increase the maximum number of concurrent sessions that can be
supported in the network, and also improve each session’s performance with regard to spectrum
mobility.
11. Relay Selection for Geographical Forwarding in Sleep-Wake Cycling
Wireless Sensor Networks
Abstract :
Our work is motivated by geographical forwarding of sporadic alarm packets to a base
station in a wireless sensor network (WSN), where the nodes are sleep-wake cycling periodically
and asynchronously. We seek to develop local forwarding algorithms that can be tuned so as to
tradeoff the end-to-end delay against a total cost, such as the hop count or total energy. Our
approach is to solve, at each forwarding node enroute to the sink, the local forwarding problem
of minimizing one-hop waiting delay subject to a lower bound constraint on a suitable reward
offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used
for the local problem is based on the end-to-end total cost objective (for instance, when the total
cost is hop count, we choose to use the progress toward sink made by a relay as the reward). The
forwarding node, to begin with, is uncertain about the number of relays, their wake-up times, and
the reward values, but knows the probability distributions of these quantities. At each relay
wake-up instant, when a relay reveals its reward value, the forwarding node’s problem is to
forward the packet or to wait for further relays to wake-up. In terms of the operations research
literature, our work can be considered as a variant of the asset selling problem. We formulate our
local forwarding problem as a partially observable Markov decision process (POMDP) and
obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity
involved in the policies derived out of these bounds, we formulate an alternate simplified model,
the optimal policy for which is a simple threshold rule. We provide simulation results to compare
the performance of the inner and outer bound policies against the simple policy, and also against
the optimal policy when the source knows the exact number of relays. Observing the good
performance and the ease of implementation of the simple policy, we apply it to our motivating
problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare
the end-to-end performance (i.e., average total delay and average total cost) obtained by the
simple policy, when used for local geographical forwarding, against that obtained by the globally
optimal forwarding algorithm proposed by Kim.
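The "simple threshold rule" flavor of the result can be illustrated as follows. This uses a hypothetical linearly decaying threshold, not the optimal policy derived in the paper: the forwarding node accepts the first awakened relay whose reward clears a bar that drops as the waiting deadline approaches.

```python
def threshold(t, r_max=1.0, horizon=1.0):
    """Reward bar that decays linearly as waiting time t approaches the
    deadline: early on, only high-reward relays are worth forwarding to."""
    return r_max * max(0.0, 1.0 - t / horizon)

def decide(wakeups, horizon=1.0):
    """Scan relay wake-ups (time, reward) in time order; forward at the
    first relay whose revealed reward clears the current threshold.
    Returns the chosen (time, reward), or None if no relay qualifies."""
    for t, reward in sorted(wakeups):
        if t > horizon:
            break
        if reward >= threshold(t, horizon=horizon):
            return (t, reward)
    return None
```

The tunable shape of the threshold is what trades one-hop waiting delay against the reward (e.g., progress toward the sink) of the chosen relay, mirroring the delay-versus-cost tradeoff in the abstract.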
12. Adaptive Position Update for Geographic Routing in Mobile Ad Hoc
Networks
Abstract :
In geographic routing, nodes need to maintain up-to-date positions of their immediate
neighbors for making effective forwarding decisions. Periodic broadcasting of beacon packets
that contain the geographic location coordinates of the nodes is a popular method used by most
geographic routing protocols to maintain neighbor positions. We contend and demonstrate that
periodic beaconing regardless of the node mobility and traffic patterns in the network is not
attractive from both update cost and routing performance points of view. We propose the
Adaptive Position Update (APU) strategy for geographic routing, which dynamically adjusts the
frequency of position updates based on the mobility dynamics of the nodes and the forwarding
patterns in the network. APU is based on two simple principles: 1) nodes whose movements are
harder to predict update their positions more frequently (and vice versa), and 2) nodes closer to
forwarding paths update their positions more frequently (and vice versa). Our theoretical
analysis, which is validated by NS2 simulations of a well-known geographic routing protocol,
Greedy Perimeter Stateless Routing Protocol (GPSR), shows that APU can significantly reduce
the update cost and improve the routing performance in terms of packet delivery ratio and
average end-to-end delay in comparison with periodic beaconing and other recently proposed
updating schemes. The benefits of APU are further confirmed by undertaking evaluations in
realistic network scenarios, which account for localization error, realistic radio propagation, and
network sparsity.
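The mobility-based part of APU can be sketched as a dead-reckoning check: neighbors extrapolate a node's position from its last beacon, and the node beacons again only when that prediction drifts too far from reality. The linear motion model and the 10 m threshold are illustrative assumptions, not APU's exact parameters.

```python
import math

def needs_beacon(actual, last_beacon_pos, velocity, dt, threshold=10.0):
    """True when the position neighbors would predict by linear
    dead-reckoning from the last beacon has drifted more than `threshold`
    from where the node actually is, so a fresh beacon is warranted."""
    px = last_beacon_pos[0] + velocity[0] * dt
    py = last_beacon_pos[1] + velocity[1] * dt
    return math.hypot(actual[0] - px, actual[1] - py) > threshold
```

A node moving predictably in a straight line thus sends almost no beacons, while one that turns or accelerates updates often, which is exactly principle 1) in the abstract.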
13. Channel Allocation and Routing in Hybrid Multichannel Multiradio
Wireless Mesh Networks
Abstract :
Many efforts have been devoted to maximizing network throughput in a multichannel
multiradio wireless mesh network. Most current solutions are based on either purely static or
purely dynamic channel allocation approaches. In this paper, we propose a hybrid multichannel
multiradio wireless mesh networking architecture, where each mesh node has both static and
dynamic interfaces. We first present an Adaptive Dynamic Channel Allocation protocol
(ADCA), which considers optimization for both throughput and delay in the channel assignment.
In addition, we also propose an Interference and Congestion Aware Routing protocol (ICAR) in
the hybrid network with both static and dynamic links, which balances the channel usage in the
network. Our simulation results show that compared to previous works, ADCA reduces the
packet delay considerably without degrading the network throughput. The hybrid architecture
shows much better adaptivity to changing traffic than a purely static architecture, without a
dramatic increase in overhead, and achieves lower delay than existing approaches for hybrid networks.
14. Toward Privacy Preserving and Collusion Resistance in a Location Proof
Updating System
Abstract :
Today’s location-sensitive services rely on a user’s mobile device to determine the current
location. This allows malicious users to access restricted resources or provide bogus alibis by
cheating on their locations. To address this issue, we propose A Privacy-Preserving LocAtion
proof Updating System (APPLAUS) in which colocated Bluetooth enabled mobile devices
mutually generate location proofs and send updates to a location proof server. Periodically
changed pseudonyms are used by the mobile devices to protect source location privacy from
each other, and from the untrusted location proof server. We also develop a user-centric location
privacy model in which individual users evaluate their location privacy levels and decide
whether and when to accept the location proof requests. In order to defend against colluding
attacks, we also present betweenness ranking-based and correlation clustering-based approaches
for outlier detection. APPLAUS can be implemented with existing network infrastructure, and
can be easily deployed in Bluetooth enabled mobile devices with little computation or power
cost. Extensive experimental results show that APPLAUS can effectively provide location
proofs, significantly preserve the source location privacy, and effectively detect colluding
attacks.
15. SSD: A Robust RF Location Fingerprint Addressing Mobile Devices’
Heterogeneity
Abstract :
Fingerprint-based methods are widely adopted for indoor localization purpose because of
their cost-effectiveness compared to other infrastructure-based positioning systems. However,
the popular location fingerprint, Received Signal Strength (RSS), is observed to differ
significantly across different devices’ hardware even under the same wireless conditions. We
derive analytically a robust location fingerprint definition, the Signal Strength Difference (SSD),
and verify its performance experimentally using a number of different mobile devices with
heterogeneous hardware. Our experiments have also considered both Wi-Fi and Bluetooth
devices, as well as both Access-Point(AP)-based localization and Mobile-Node (MN)-assisted
localization. We present the results of two well-known localization algorithms (K Nearest
Neighbor and Bayesian Inference) when our proposed fingerprint is used, and demonstrate its
robustness when the testing device differs from the training device. We also compare these SSD-
based localization algorithms’ performance against that of two other approaches in the literature
that are designed to mitigate the effects of mobile node hardware variations, and show that SSD-
based algorithms have better accuracy.
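The key algebraic property of SSD is easy to demonstrate: a constant per-device gain offset cancels when RSS values are differenced against a reference AP, so fingerprints from heterogeneous hardware line up. A minimal sketch (the vector layout and reference-AP choice are assumptions):

```python
def ssd(rss, ref_ap=0):
    """Signal Strength Difference fingerprint: subtract the reference AP's
    RSS from every other AP's reading. A per-device constant offset
    (antenna/chipset gain) appears in both terms and cancels."""
    base = rss[ref_ap]
    return [r - base for i, r in enumerate(rss) if i != ref_ap]
```

Two devices at the same spot whose radios report systematically different absolute RSS therefore still produce identical SSD fingerprints, which is why SSD-based matching is robust when the testing device differs from the training device.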
16. EMAP: Expedite Message Authentication Protocol for Vehicular Ad Hoc
Networks
Abstract :
Vehicular ad hoc networks (VANETs) adopt the Public Key Infrastructure (PKI) and
Certificate Revocation Lists (CRLs) for their security. In any PKI system, the authentication of a
received message is performed by checking if the certificate of the sender is included in the
current CRL, and verifying the authenticity of the certificate and signature of the sender. In this
paper, we propose an Expedite Message Authentication Protocol (EMAP) for VANETs, which
replaces the time-consuming CRL checking process by an efficient revocation checking process.
The revocation check process in EMAP uses a keyed Hash Message Authentication Code
(HMAC), where the key used in calculating the HMAC is shared only between nonrevoked On-Board
Units (OBUs). In addition, EMAP uses a novel probabilistic key distribution, which
enables nonrevoked OBUs to securely share and update a secret key. EMAP can significantly
decrease the message loss ratio due to the message verification delay compared with the
conventional authentication methods employing CRL. By conducting security analysis and
performance evaluation, EMAP is demonstrated to be secure and efficient.
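The keyed-HMAC revocation check can be sketched with Python's standard `hmac` module. SHA-256 and the byte-string framing below are assumptions for illustration, not EMAP's exact construction:

```python
import hashlib
import hmac

def revocation_tag(shared_key: bytes, message: bytes) -> bytes:
    """Tag computed with the group key that, by construction, only
    nonrevoked OBUs currently hold."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def check(shared_key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-cost revocation check replacing a linear CRL scan: a
    revoked OBU no longer holds the current key, so its tag fails."""
    return hmac.compare_digest(revocation_tag(shared_key, message), tag)
```

The point of the design is visible here: verifying one HMAC is O(1) regardless of how many certificates have been revoked, whereas scanning a CRL grows with its length.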
17. Channel Assignment for Throughput Optimization in Multichannel
Multiradio Wireless Mesh Networks Using Network Coding
Abstract :
Compared to single-hop networks such as WiFi, multihop infrastructure wireless mesh
networks (WMNs) can potentially embrace the broadcast benefits of a wireless medium in a
more flexible manner. Rather than being point-to-point, links in the WMNs may originate from a
single node and reach more than one other node. Nodes located farther than a one-hop distance
and overhearing such transmissions may opportunistically help relay packets for previous hops.
This phenomenon is called opportunistic overhearing/listening. With multiple radios, a node can
also improve its capacity by transmitting over multiple radios simultaneously using orthogonal
channels. Capitalizing on these potential advantages requires effective routing and efficient
mapping of channels to radios (channel assignment (CA)). While efficient channel assignment
can greatly reduce interference from nearby transmitters, effective routing can potentially relieve
congestion on paths to the infrastructure. Routing, however, requires that only packets pertaining
to a particular connection be routed on a predetermined route. Random network coding (RNC)
breaks this constraint by allowing nodes to randomly mix packets overheard so far before
forwarding. A relay node thus only needs to know how many packets, and not which packets, it
should send. We mathematically formulate the joint problem of random network coding, channel
assignment, and broadcast link scheduling, taking into account opportunistic overhearing, the
interference constraints, the coding constraints, the number of orthogonal channels, the number
of radios per node, and fairness among unicast connections. Based on this formulation, we
develop a suboptimal, auction-based solution for overall network throughput optimization.
Performance evaluation results show that our algorithm can effectively exploit multiple radios
and channels and can cope with fairness issues arising from auctions. Our algorithm also shows
promising gains over traditional routing solutions in which various channel assignment strategies
are used.
18. Content Sharing over Smartphone-Based Delay-Tolerant Networks
Abstract :
With the growing number of smartphone users, peer-to-peer ad hoc content sharing is
expected to occur more often. Thus, new content sharing mechanisms should be developed as
traditional data delivery schemes are not efficient for content sharing due to the sporadic
connectivity between smartphones. To accomplish data delivery in such challenging
environments, researchers have proposed the use of store-carry-forward protocols, in which a
node stores a message and carries it until a forwarding opportunity arises through an encounter
with other nodes. Most previous works in this field have focused on the prediction of whether
two nodes would encounter each other, without considering the place and time of the encounter.
In this paper, we propose discover-predict-deliver as an efficient content sharing scheme for
delay-tolerant smartphone networks. In our proposed scheme, contents are shared using the
mobility information of individuals. Specifically, our approach employs a mobility learning
algorithm to identify places indoors and outdoors. A hidden Markov model is used to predict an
individual’s future mobility information. Evaluation based on real traces indicates that with the
proposed approach, 87 percent of contents can be correctly discovered and delivered within 2
hours when the content is available only in 30 percent of nodes in the network. We implement a
sample application on commercial smartphones, and we validate its efficiency to analyze the
practical feasibility of the content sharing application. Our system incurs approximately 2
percent CPU overhead and reduces the battery lifetime of a smartphone by at most 15 percent.
19. Discovery and Verification of Neighbor Positions in Mobile Ad Hoc
Networks
Abstract :
A growing number of ad hoc networking protocols and location-aware services require that
mobile nodes learn the position of their neighbors. However, such a process can be easily abused
or disrupted by adversarial nodes. In absence of a priori trusted nodes, the discovery and
verification of neighbor positions presents challenges that have been scarcely investigated in the
literature. In this paper, we address this open issue by proposing a fully distributed cooperative
solution that is robust against independent and colluding adversaries, and can be impaired only
by an overwhelming presence of adversaries. Results show that our protocol can thwart more
than 99 percent of the attacks under the best possible conditions for the adversaries, with
minimal false positive rates.
20. Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks
Abstract :
Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such
as microclimate monitoring, precision agriculture, and audio/video surveillance. A key challenge
faced by data-intensive WSNs is to transmit all the data generated within an application’s
lifetime to the base station despite the fact that sensor nodes have limited power supplies. We
propose using low-cost disposable mobile relays to reduce the energy consumption of
data-intensive WSNs. Our approach differs from previous work in two main aspects. First, it does not
require complex motion planning of mobile nodes, so it can be implemented on a number of low-
cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility
and wireless transmissions into a holistic optimization framework. Our framework consists of
three main algorithms. The first algorithm computes an optimal routing tree assuming no nodes
can move. The second algorithm improves the topology of the routing tree by greedily adding
new nodes exploiting mobility of the newly added nodes. The third algorithm improves the
routing tree by relocating its nodes without changing its topology. This iterative algorithm
converges on the optimal position for each node given the constraint that the routing tree
topology does not change. We present efficient distributed implementations for each algorithm
that require only limited, localized synchronization. Because we do not necessarily compute an
optimal topology, our final routing tree is not necessarily optimal. However, our simulation
results show that our algorithms significantly outperform the best existing solutions.
21. Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks
Abstract :
Ad hoc low-power wireless networks are an exciting research direction in sensing and
pervasive computing. Prior security work in this area has focused primarily on denial of
communication at the routing or medium access control levels. This paper explores resource
depletion attacks at the routing protocol layer, which permanently disable networks by quickly
draining nodes’ battery power. These “Vampire” attacks are not specific to any one protocol,
but rather rely on the properties of many popular classes of routing protocols. We find that all
examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect,
and are easy to carry out using as few as one malicious insider sending only protocol-compliant
messages. In the worst case, a single Vampire can increase network-wide energy usage by a
factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these
types of attacks, including a new proof-of-concept protocol that provably bounds the damage
caused by Vampires during the packet forwarding phase.
CLOUD COMPUTING
1. Optimal Multiserver Configuration for Profit Maximization in Cloud
Computing
Abstract :
As cloud computing becomes more and more popular, understanding the economics of cloud
computing becomes critically important. To maximize the profit, a service provider should
understand both service charges and business costs, and how they are determined by the
characteristics of the applications and the configuration of a multiserver system. The problem of
optimal multiserver configuration for profit maximization in a cloud computing environment is
studied. Our pricing model takes into consideration such factors as the amount of a service, the
workload of an application environment, the configuration of a multiserver system, the service-
level agreement, the satisfaction of a consumer, the quality of a service, the penalty of a low-
quality service, the cost of renting, the cost of energy consumption, and a service provider’s
margin and profit. Our approach is to treat a multiserver system as an M/M/m queuing model,
such that our optimization problem can be formulated and solved analytically. Two server speed
and power consumption models are considered, namely, the idle-speed model and the constant-
speed model. The probability density function of the waiting time of a newly arrived service
request is derived. The expected service charge to a service request is calculated. The expected
net business gain in one unit of time is obtained. Numerical calculations of the optimal server
size and the optimal server speed are demonstrated.
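The waiting-time analysis in an M/M/m model typically starts from the Erlang C formula, the probability that a newly arrived request finds all m servers busy and must wait. A minimal sketch (the function name and interface are illustrative, not taken from the paper):

```python
from math import factorial

def erlang_c(m, rho):
    """Probability that an arriving request must wait in an M/M/m queue.

    m   -- number of servers
    rho -- offered load lambda/mu; stability requires rho < m
    """
    idle_terms = sum(rho ** k / factorial(k) for k in range(m))
    busy_term = (rho ** m / factorial(m)) * (m / (m - rho))
    return busy_term / (idle_terms + busy_term)

# For m = 1 this reduces to the classic M/M/1 result P(wait) = rho.
```

From this probability the expected waiting time follows as W = C(m, rho) / (m·mu − lambda), which is the kind of quantity the optimal server size and server speed calculations trade off against rental and energy costs.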
2. Efficient Resource Mapping Framework over Networked Clouds via
Iterated Local Search-Based Request Partitioning
Abstract :
The cloud represents a computing paradigm where shared configurable resources are
provided as a service over the Internet. Adding intra- or intercloud communication resources to
the resource mix leads to a networked cloud computing environment. Following the cloud
Infrastructure-as-a-Service paradigm, and in order to create a flexible management framework, it
is of paramount importance to address efficiently the resource mapping problem within this
context. To deal with the inherent complexity and scalability issue of the resource mapping
problem across different administrative domains, in this paper a hierarchical framework is
described. First, a novel request partitioning approach based on Iterated Local Search is
introduced that facilitates the cost-efficient and online splitting of user requests among eligible
cloud service providers (CPs) within a networked cloud environment. Following and capitalizing
on the outcome of the request partitioning phase, the embedding phase—where the actual
mapping of requested virtual to physical resources is performed—can be realized through the use
of a distributed intracloud resource mapping approach that allows for efficient and balanced
allocation of cloud resources. Finally, a thorough evaluation of the proposed overall framework
on a simulated networked cloud environment is provided and critically compared against an
exact request partitioning solution as well as another common intradomain virtual resource
embedding solution.
3. Harnessing the Cloud for Securely Outsourcing Large-Scale Systems of
Linear Equations
Abstract :
Cloud computing economically enables customers with limited computational resources to
outsource large-scale computations to the cloud. However, how to protect customers’
confidential data involved in the computations then becomes a major security concern. In this
paper, we present a secure outsourcing mechanism for solving large-scale systems of linear
equations (LE) in the cloud. Because applying traditional approaches like Gaussian elimination or
LU decomposition (a.k.a. direct methods) to such large-scale LEs would be prohibitively
expensive, we build the secure LE outsourcing mechanism via a completely different approach—
iterative methods, which are much easier to implement in practice and demand only relatively
simple matrix-vector operations. Specifically, our mechanism enables a customer to securely
harness the cloud for iteratively finding successive approximations to the LE solution, while
keeping both the sensitive input and output of the computation private. For robust cheating
detection, we further explore the algebraic property of matrix-vector operations and propose an
efficient result verification mechanism, which allows the customer to verify all answers received
from previous iterative approximations in one batch with high probability. Thorough security
analysis and prototype experiments on Amazon EC2 demonstrate the validity and practicality of
our proposed design.
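What makes iterative methods outsourcing-friendly is that each sweep needs only matrix-vector operations. As a point of reference, here is a plain (non-secure, non-outsourced) Jacobi iteration; it is a generic textbook method, not the paper's protected protocol:

```python
def jacobi(A, b, iters=100):
    """Plain Jacobi iteration for A x = b.

    Each sweep uses only matrix-vector arithmetic, the operation class an
    outsourcing mechanism would delegate to the cloud. Converges when A is
    strictly diagonally dominant.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # x_{k+1}[i] = (b[i] - sum_{j != i} A[i][j] * x_k[j]) / A[i][i]
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

In the secure setting, the customer would additionally blind the inputs before each delegated multiplication and verify the returned products, which is where the batch result-verification mechanism comes in.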
4. QoS Ranking Prediction for Cloud Services
Abstract :
Cloud computing is becoming popular. Building high-quality cloud applications is a critical
research problem. QoS rankings provide valuable information for making optimal cloud service
selection from a set of functionally equivalent service candidates. To obtain QoS values, real-
world invocations on the service candidates are usually required. To avoid the time-consuming
and expensive real-world service invocations, this paper proposes a QoS ranking prediction
framework for cloud services by taking advantage of the past service usage experiences of other
consumers. Our proposed framework requires no additional invocations of cloud services when
making QoS ranking prediction. Two personalized QoS ranking prediction approaches are
proposed to predict the QoS rankings directly. Comprehensive experiments are conducted
employing real-world QoS data, including 300 distributed users and 500 real-world web services
all over the world. The experimental results show that our approaches outperform other
competing approaches.
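Ranking-based collaborative approaches of this kind commonly measure how similar two consumers are by the rank correlation of their QoS observations on commonly invoked services. A small illustrative helper (the name and interface are hypothetical, not the paper's exact formulation):

```python
def kendall_similarity(qos_a, qos_b):
    """Kendall rank correlation over commonly invoked services.

    qos_a, qos_b: dicts mapping service id -> observed QoS value for two
    users. Returns a value in [-1, 1]; 1 means identical rankings.
    Ties are counted as discordant here for simplicity.
    """
    common = sorted(set(qos_a) & set(qos_b))
    pairs = [(s, t) for i, s in enumerate(common) for t in common[i + 1:]]
    if not pairs:
        return 0.0
    score = sum(1 if (qos_a[s] - qos_a[t]) * (qos_b[s] - qos_b[t]) > 0 else -1
                for s, t in pairs)
    return score / len(pairs)
```

Users with high rank correlation to the active user can then contribute their observed orderings to predict the active user's QoS ranking without any new service invocations.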
5. Cloudy with a Chance of Cost Savings
Abstract :
Cloud-based hosting is claimed to possess many advantages over traditional in-house (on-
premise) hosting such as better scalability, ease of management, and cost savings. It is not
difficult to understand how cloud-based hosting can be used to address some of the existing
limitations and extend the capabilities of many types of applications. However, one of the most
important questions is whether cloud-based hosting will be economically feasible for a given
application if migrated into the cloud. It is not straightforward to answer this question because it
is not clear how the application will benefit from the claimed advantages, and, in turn, be able to
convert them into tangible cost savings. Within cloud-based hosting offerings, there is a wide
range of hosting options one can choose from, each impacting the cost in a different way.
Answering these questions requires an in-depth understanding of the cost implications of all the
possible choices specific to one's circumstances. In this study, we identify a diverse set of key
factors affecting the costs of deployment choices. Using benchmarks representing two different
applications (TPC-W and TPC-E) we investigate the evolution of costs for different deployment
choices. We consider important application characteristics such as workload intensity, growth
rate, traffic size, storage, and software license to understand their impact on the overall costs. We
also discuss the impact of workload variance and cloud elasticity, and certain cost factors that are
subjective in nature.
6. Error-Tolerant Resource Allocation and Payment Minimization for Cloud
System
Abstract :
With virtual machine (VM) technology being increasingly mature, compute resources in
cloud systems can be partitioned in fine granularity and allocated on demand. We make three
contributions in this paper: 1) We formulate a deadline-driven resource allocation problem based
on the cloud environment facilitated with VM resource isolation technology, and also propose a
novel polynomial-time solution that minimizes users’ payment in terms of their
expected deadlines. 2) By analyzing the upper bound of task execution length based on the
possibly inaccurate workload prediction, we further propose an error-tolerant method to
guarantee a task’s completion within its deadline. 3) We validate its effectiveness over a real VM-
facilitated cluster environment under different levels of competition. In our experiment, by
tuning algorithmic input deadline based on our derived bound, task execution length can always
be limited within its deadline in the sufficient-supply situation; the mean execution length still
stays within 70 percent of the user-specified deadline under severe competition. Under the
original-deadline-based solution, about 52.5 percent of tasks are completed within 0.95-1.0 times
their deadlines, which still conforms to the deadline-guaranteed requirement. Only 20 percent of
tasks violate their deadlines, yet most (17.5 percent) still finish within 1.05 times their
deadlines.
7. Mona: Secure Multi-Owner Data Sharing for Dynamic Groups in the
Cloud
Abstract :
Characterized by low maintenance, cloud computing provides an economical and
efficient solution for sharing group resource among cloud users. Unfortunately, sharing data in a
multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a
challenging issue, due to frequent changes in membership. In this paper, we propose a
secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud. By
leveraging group signature and dynamic broadcast encryption techniques, any cloud user can
anonymously share data with others. Meanwhile, the storage overhead and encryption
computation cost of our scheme are independent of the number of revoked users. In addition,
we analyze the security of our scheme with rigorous proofs, and demonstrate the efficiency of
our scheme in experiments.
8. A New Disk I/O Model of Virtualized Cloud Environment
Abstract :
In a traditional virtualized cloud environment, using asynchronous I/O in the guest file
system and synchronous I/O in the host file system to handle an asynchronous user disk write
exhibits several drawbacks, such as performance disturbance among different guests and
consistency maintenance across guest failures. To address these issues, this paper introduces a
novel disk I/O model for virtualized cloud systems, called HypeGear, where the guest file system
uses synchronous operations to deal with the guest write request and the host file system
performs asynchronous operations to write the data to the hard disk. A prototype system is
implemented on the Xen hypervisor and our experimental results verify that this new model has
many advantages over the conventional asynchronous-synchronous model. We also evaluate the
overhead of asynchronous I/O at the host that our new model introduces. The results
demonstrate that it imposes little cost on the host layer.
9. On Data Staging Algorithms for Shared Data Accesses in Clouds
Abstract :
In this paper, we study the strategies for efficiently achieving data staging and caching on a
set of vantage sites in a cloud system with a minimum cost. Unlike traditional research, we
do not intend to identify access patterns to facilitate future requests. Instead, with such
information presumably known in advance, our goal is to efficiently stage the shared data
items to predetermined sites at advocated time instants to align with the patterns while
minimizing the monetary costs for caching and transmitting the requested data items. To this
end, we follow the cost and network models in [1] and extend the analysis to multiple data items,
each with single or multiple copies. Our results show that under a homogeneous cost model, when
the ratio of transmission cost to caching cost is low, a single copy of each data item can
efficiently serve all the user requests. In the multicopy situation, we also consider the tradeoff
between the transmission cost and caching cost by controlling the upper bounds of transmissions
and copies. The upper bound can be given either on per-item basis or on all-item basis. We
present efficient optimal solutions based on dynamic programming techniques to all these cases
provided that the upper bound is polynomially bounded by the number of service requests and
the number of distinct data items. In addition to the homogeneous cost model, we also briefly
discuss this problem under a heterogeneous cost model with some simple yet practical
restrictions and present a 2-approximation algorithm for the general case. We validate our
findings by implementing a data staging solver and conducting extensive simulation studies
on the behaviors of the algorithms.
10. Dynamic Optimization of Multiattribute Resource Allocation in Self-
Organizing Clouds
Abstract :
By leveraging virtual machine (VM) technology which provides performance and fault isolation,
cloud resources can be provisioned on demand in a fine grained, multiplexed manner rather than
in monolithic pieces. By integrating volunteer computing into cloud architectures, we envision a
gigantic self-organizing cloud (SOC) being formed to reap the huge potential of untapped
commodity computing power over the Internet. Toward this new architecture where each
participant may autonomously act as both resource consumer and provider, we propose a fully
distributed, VM-multiplexing resource allocation scheme to manage decentralized resources. Our
approach not only achieves maximized resource utilization using the proportional share model
(PSM), but also delivers provably and adaptively optimal execution efficiency. We also design a
novel multiattribute range query protocol for locating qualified nodes. Contrary to existing
solutions which often generate bulky messages per request, our protocol produces only one
lightweight query message per task on the Content Addressable Network (CAN). It works
effectively to find for each task its qualified resources under a randomized policy that mitigates
the contention among requesters. We show that the SOC with our optimized algorithms can
improve system throughput by 15-60 percent compared to a P2P Grid model. Our solution also
exhibits fairly high adaptability in a dynamic node-churning environment.
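The proportional share model mentioned above divides a node's divisible capacity among competing tasks in proportion to their weights. A toy illustration of PSM (not the paper's full VM-multiplexing allocation scheme):

```python
def proportional_share(capacity, weights):
    """Proportional share model: split a divisible resource (e.g. the CPU
    capacity of one node) among tasks in proportion to their weights.

    capacity -- total divisible resource on the node
    weights  -- positive share weights, one per task
    """
    total = sum(weights)
    return [capacity * w / total for w in weights]
```

For example, `proportional_share(100.0, [1, 1, 2])` splits a node's capacity as `[25.0, 25.0, 50.0]`; the full scheme additionally adapts these shares to optimize execution efficiency across nodes.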
11. Scalable and Secure Sharing of Personal Health Records in Cloud
Computing Using Attribute-Based Encryption
Abstract :
Personal health record (PHR) is an emerging patient-centric model of health information
exchange, which is often outsourced to be stored at a third party, such as cloud providers.
However, there have been wide privacy concerns as personal health information could be
exposed to those third-party servers and to unauthorized parties. To assure the patients’ control
over access to their own PHRs, encrypting the PHRs before outsourcing is a promising method.
Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and
efficient user revocation, have remained the most important challenges toward achieving fine-
grained, cryptographically enforced data access control. In this paper, we propose a novel
patient-centric framework and a suite of mechanisms for data access control to PHRs stored in
semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we
leverage attribute-based encryption (ABE) techniques to encrypt each patient’s PHR file.
Different from previous works in secure data outsourcing, we focus on the multiple data owner
scenario, and divide the users in the PHR system into multiple security domains, which greatly
reduces the key management complexity for owners and users. A high degree of patient privacy
is guaranteed simultaneously by exploiting multiauthority ABE. Our scheme also enables
dynamic modification of access policies or file attributes, supports efficient on-demand
user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical
and experimental results are presented which show the security, scalability, and efficiency of our
proposed scheme.
PARALLEL AND DISTRIBUTED SYSTEMS
1. A Truthful Dynamic Workflow Scheduling Mechanism for Commercial
Multicloud Environments
Abstract :
The ultimate goal of cloud providers in providing resources is to increase their revenue. This
goal leads to a selfish behavior that negatively affects the users of a commercial multicloud
environment. In this paper, we introduce a pricing model and a truthful mechanism for
scheduling single tasks considering two objectives: monetary cost and completion time. With
respect to the social cost of the mechanism, i.e., minimizing the completion time and monetary
cost, we extend the mechanism for dynamic scheduling of scientific workflows. We theoretically
analyze the truthfulness and the efficiency of the mechanism and present extensive experimental
results showing significant impact of the selfish behavior of the cloud providers on the efficiency
of the whole system. The experiments conducted using real-world and synthetic workflow
applications demonstrate that our solutions dominate in most cases the Pareto-optimal solutions
estimated by two classical multiobjective evolutionary algorithms.
2. Anchor: A Versatile and Efficient Framework for Resource Management
in the Cloud
Abstract :
We present Anchor, a general resource management architecture that uses the stable
matching framework to decouple policies from mechanisms when mapping virtual machines to
physical servers. In Anchor, clients and operators are able to express a variety of distinct
resource management policies as they deem fit, and these policies are captured as preferences in
the stable matching framework. The highlight of Anchor is a new many-to-one stable matching
theory that efficiently matches VMs with heterogeneous resource needs to servers, using both
offline and online algorithms. Our theoretical analyses show the convergence and optimality of
the algorithm. Our experiments with a prototype implementation on a 20-node server cluster, as
well as large-scale simulations based on real-world workload traces, demonstrate that the
architecture is able to realize a diverse set of policy objectives with good performance and
practicality.
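Many-to-one stable matching of this kind generalizes Gale-Shapley deferred acceptance to servers with capacities: VMs propose in preference order, and each server provisionally keeps its best proposals up to capacity. A simplified sketch of that machinery (assuming, for brevity, that every server ranks every VM; this is an illustration, not Anchor's actual algorithm):

```python
def stable_match(vm_pref, server_pref, capacity):
    """Deferred-acceptance matching of VMs to servers with capacities.

    vm_pref     -- dict: VM -> list of servers in preference order
    server_pref -- dict: server -> list of VMs in preference order
    capacity    -- dict: server -> max number of VMs it can host
    """
    # Precompute each server's rank of each VM for fast comparison.
    rank = {s: {v: i for i, v in enumerate(p)} for s, p in server_pref.items()}
    free = list(vm_pref)             # VMs not yet placed
    nxt = {v: 0 for v in vm_pref}    # index of next server each VM proposes to
    assigned = {s: [] for s in server_pref}
    placement = {}
    while free:
        v = free.pop()
        if nxt[v] >= len(vm_pref[v]):
            continue                 # v has exhausted its preference list
        s = vm_pref[v][nxt[v]]
        nxt[v] += 1
        assigned[s].append(v)
        assigned[s].sort(key=lambda x: rank[s][x])
        placement[v] = s
        if len(assigned[s]) > capacity[s]:
            worst = assigned[s].pop()  # evict the least-preferred VM
            del placement[worst]
            free.append(worst)
    return placement
```

The resulting placement is stable in the classical sense: no VM and server both prefer each other to their assigned match, which is the property Anchor's theory extends to heterogeneous resource needs.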
3. A Highly Practical Approach toward Achieving Minimum Data Sets
Storage Cost in the Cloud
Abstract :
Massive computation power and storage capacity of cloud computing systems allow
scientists to deploy computation and data intensive applications without infrastructure
investment, where large application data sets can be stored in the cloud. Based on the pay-as-
you-go model, storage strategies and benchmarking approaches have been developed for cost-
effectively storing large volume of generated application data sets in the cloud. However, they
are either insufficiently cost-effective for the storage or impractical to be used at runtime. In this
paper, toward achieving the minimum cost benchmark, we propose a novel, highly cost-effective
and practical storage strategy that can automatically decide whether a generated data set should
be stored or not at runtime in the cloud. The main focus of this strategy is the local-optimization
for the tradeoff between computation and storage, while secondarily also taking users’ (optional)
preferences on storage into consideration. Both theoretical analysis and simulations conducted on
general (random) data sets as well as specific real world applications with Amazon’s cost model
show that the cost-effectiveness of our strategy is close to or even the same as the minimum cost
benchmark, and the efficiency is very high for practical runtime utilization in the cloud.
4. Toward Fine-Grained, Unsupervised, Scalable Performance Diagnosis for
Production Cloud Computing Systems
Abstract :
Performance diagnosis is labor intensive in production cloud computing systems. Such
systems typically face many real-world challenges, which the existing diagnosis techniques for
such distributed systems cannot effectively solve. An efficient, unsupervised diagnosis tool for
locating fine-grained performance anomalies is still lacking in production cloud computing
systems. This paper proposes CloudDiag to bridge this gap. Combining a statistical technique
and a fast matrix recovery algorithm, CloudDiag can efficiently pinpoint fine-grained causes of
performance problems without requiring any domain-specific knowledge of the target
system. CloudDiag has been applied in a practical production cloud computing system to
diagnose performance problems. We demonstrate the effectiveness of CloudDiag in three real-
world case studies.
5. Scalable and Accurate Graph Clustering and Community Structure
Detection
Abstract :
One of the most useful measures of cluster quality is the modularity of the partition, which
measures the difference between the number of the edges joining vertices from the same cluster
and the expected number of such edges in a random graph. In this paper, we show that the
problem of finding a partition maximizing the modularity of a given graph G can be reduced to a
minimum weighted cut (MWC) problem on a complete graph with the same vertices as G. We
then show that the resulting minimum cut problem can be efficiently solved by adapting existing
graph partitioning techniques. Our algorithm finds clusterings of a comparable quality and is
much faster than the existing clustering algorithms.
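The modularity measure compared here is Newman's Q: the fraction of edges that fall inside clusters minus the fraction expected in a random graph with the same degree sequence. A compact sketch of that computation (a generic formula implementation, not the paper's MWC reduction):

```python
def modularity(edges, community):
    """Newman modularity Q of a partition.

    edges     -- list of undirected edges (u, v)
    community -- dict mapping node -> cluster id
    """
    m = len(edges)
    deg = {}
    intra = 0
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            intra += 1
    # Sum of degrees within each community.
    comm_deg = {}
    for node, c in community.items():
        comm_deg[c] = comm_deg.get(c, 0) + deg.get(node, 0)
    return intra / m - sum((d / (2 * m)) ** 2 for d in comm_deg.values())
```

Maximizing this Q over all partitions is what the paper reduces to a minimum weighted cut problem on a complete graph.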
6. Load Rebalancing for Distributed File Systems in Clouds
Abstract :
Distributed file systems are key building blocks for cloud computing applications based on
the MapReduce programming paradigm. In such file systems, nodes simultaneously serve
computing and storage functions; a file is partitioned into a number of chunks allocated in
distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. However, in
a cloud computing environment, failure is the norm, and nodes may be upgraded, replaced, and
added to the system. Files can also be dynamically created, deleted, and appended. This results in
load imbalance in a distributed file system; that is, the file chunks are not distributed as
uniformly as possible among the nodes. Emerging distributed file systems in production systems
strongly depend on a central node for chunk reallocation. This dependence is clearly inadequate
in a large-scale, failure-prone environment because the central load balancer is put under
considerable workload that scales linearly with the system size, and may thus become the
performance bottleneck and the single point of failure. In this paper, a fully distributed load
rebalancing algorithm is presented to cope with the load imbalance problem. Our algorithm is
compared against a centralized approach in a production system and a competing distributed
solution presented in the literature. The simulation results indicate that our proposal is
comparable with the existing centralized approach and considerably outperforms the prior
distributed algorithm in terms of load imbalance factor, movement cost, and algorithmic
overhead. The performance of our proposal implemented in the Hadoop distributed file system is
further investigated in a cluster environment.
7. SPOC: A Secure and Privacy-Preserving Opportunistic Computing
Framework for Mobile-Healthcare Emergency
Abstract :
With the pervasiveness of smart phones and the advance of wireless body sensor networks
(BSNs), mobile Healthcare (m-Healthcare), which extends the operation of healthcare providers
into a pervasive environment for better health monitoring, has attracted considerable interest
recently. However, the flourishing of m-Healthcare still faces many challenges, including
information security and privacy preservation. In this paper, we propose a secure and privacy-
preserving opportunistic computing framework, called SPOC, for m-Healthcare emergency.
With SPOC, smart phone resources including computing power and energy can be
opportunistically gathered to process the computing-intensive personal health information (PHI)
during an m-Healthcare emergency with minimal privacy disclosure. Specifically, to balance
PHI privacy disclosure against the high reliability of PHI processing and transmission in an
m-Healthcare emergency, we introduce an efficient user-centric privacy access control in the
SPOC framework,
which is based on an attribute-based access control and a new privacy-preserving scalar product
computation (PPSPC) technique, and allows a medical user to decide who can participate in the
opportunistic computing to assist in processing his overwhelming PHI data. Detailed security
analysis shows that the proposed SPOC framework can efficiently achieve user-centric privacy
access control in m- Healthcare emergency. In addition, performance evaluations via extensive
simulations demonstrate SPOC’s effectiveness in terms of providing highly reliable PHI
processing and transmission while minimizing privacy disclosure during m-Healthcare
emergencies.
8. Improve Efficiency and Reliability in Single-Hop WSNs with Transmit-
Only Nodes
Abstract :
Wireless Sensor Networks (WSNs) will play a significant role at the “edge” of the future
“Internet of Things.” In particular, WSNs with transmit-only nodes are attracting more attention
due to their advantages in supporting applications requiring dense and long-lasting deployment at
a very low cost and energy consumption. However, the lack of receivers in transmit-only nodes
renders most existing MAC protocols invalid. Based on our previous study on WSNs with pure
transmit-only nodes, this work proposes a simple, yet cost effective and powerful single-hop
hybrid WSN cluster architecture that contains not only transmit-only nodes but also standard
nodes (with transceivers). Along with the hybrid architecture, this work also proposes a new
MAC layer protocol framework called Robust Asynchronous Resource Estimation (RARE) that
efficiently and reliably manages the densely deployed single-hop hybrid cluster in a self-
organized fashion. Through analysis and extensive simulations, the proposed framework is
shown to meet or exceed the needs of most applications in terms of the data delivery probability,
QoS differentiation, system capacity, energy consumption, and reliability. To the best of our
knowledge, this work is the first that brings reliable scheduling to WSNs containing both
nonsynchronized transmit-only nodes and standard nodes.
9. Optimal Client-Server Assignment for Internet Distributed Systems
Abstract :
We investigate an underlying mathematical model and algorithms for optimizing the
performance of a class of distributed systems over the Internet. Such a system consists of a large
number of clients who communicate with each other indirectly via a number of intermediate
servers. Optimizing the overall performance of such a system then can be formulated as a client-
server assignment problem whose aim is to assign the clients to the servers in such a way as to
satisfy some prespecified requirements on the communication cost and load balancing. We show
that 1) the total communication load and load balancing are two opposing metrics, and
consequently, their tradeoff is inherent in this class of distributed systems; 2) in general, finding
the optimal client-server assignment for some prespecified requirements on the total load and
load balancing is NP-hard; therefore, 3) we propose a heuristic via relaxed convex
optimization for finding the approximate solution. Our simulation results indicate that the
proposed algorithm outperforms other heuristics, including the popular
Normalized Cuts algorithm.
10. Fast Channel Zapping with Destination-Oriented Multicast for IP Video
Delivery
Abstract :
Channel zapping time is a critical quality of experience (QoE) metric for IP-based video
delivery systems such as IPTV. An interesting zapping acceleration scheme based on time-
shifted subchannels (TSS) was recently proposed, which can ensure a zapping delay bound as
well as maintain the picture quality during zapping. However, the behaviors of the TSS-based
scheme have not been fully studied yet. Furthermore, the existing TSS-based implementation
adopts the traditional IP multicast, which is not scalable for a large-scale distributed system.
Corresponding to such issues, this paper makes contributions in two aspects. First, we resort to
theoretical analysis to understand the fundamental properties of the TSS-based service model.
We show that there exists an optimal subchannel data rate which minimizes the redundant traffic
transmitted over subchannels. Moreover, we reveal a start-up effect, where the existing operation
pattern in the TSS-based model could violate the zapping delay bound. With a solution proposed
to resolve the start-up effect, we rigorously prove that a zapping delay bound equal to the
subchannel time shift is guaranteed by the updated TSS-based model. Second, we propose a
destination-oriented-multicast (DOM) assisted zapping acceleration (DAZA) scheme for a
scalable TSS-based implementation, where a subscriber can seamlessly migrate from a
subchannel to the main channel after zapping without any control message exchange over the
network. Moreover, the subchannel selection in DAZA is independent of the zapping request
signaling delay, resulting in improved robustness and reduced messaging overhead in a
distributed environment. We implement DAZA in ns-2 and multicast an MPEG-4 video stream
over a practical network topology. Extensive simulation results are presented to demonstrate the
validity of our analysis and DAZA scheme.
11. Cluster-Based Certificate Revocation with Vindication Capability for
Mobile Ad Hoc Networks
Abstract :
Mobile ad hoc networks (MANETs) have attracted much attention due to their mobility and
ease of deployment. However, their wireless and dynamic nature renders them more vulnerable to
various types of security attacks than wired networks. The major challenge is to guarantee
secure network services. To meet this challenge, certificate revocation is an important integral
component to secure network communications. In this paper, we focus on the issue of certificate
revocation to isolate attackers from further participating in network activities. For quick and
accurate certificate revocation, we propose the Cluster-based Certificate Revocation with
Vindication Capability (CCRVC) scheme. In particular, to improve the reliability of the scheme,
we recover the warned nodes to take part in the certificate revocation process; to enhance the
accuracy, we propose a threshold-based mechanism to assess and vindicate warned nodes as
legitimate or not before recovering them. The performance of our scheme is evaluated
by both numerical and simulation analysis. Extensive results demonstrate that the proposed
certificate revocation scheme is effective and efficient to guarantee secure communications in
mobile ad hoc networks.
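The threshold-based vindication step can be sketched in a few lines. This is an illustrative simplification, not the paper's exact mechanism: the accusation-ratio rule, the node name, and the threshold value of 0.5 are all assumptions made here for demonstration.

```python
def vindicate(warned_node, accusations, cluster_size, threshold=0.5):
    """Decide whether a warned node should be vindicated (illustrative only).

    accusations:  number of cluster members accusing the node.
    cluster_size: total number of nodes in the cluster.
    A node is vindicated (treated as falsely accused) when the
    accusation ratio stays below the threshold.
    """
    ratio = accusations / cluster_size
    return ratio < threshold

# 2 accusers out of a 10-node cluster -> likely a false accusation
print(vindicate("n7", accusations=2, cluster_size=10))  # True
print(vindicate("n7", accusations=6, cluster_size=10))  # False
```

A vindicated node rejoins the revocation process; a node that fails the check stays isolated.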
12. A Secure Protocol for Spontaneous Wireless Ad Hoc Networks Creation
Abstract :
This paper presents a secure protocol for spontaneous wireless ad hoc networks that uses
a hybrid symmetric/asymmetric scheme and the trust between users to exchange the
initial data and the secret keys that will be used to encrypt the data. Trust is based on
the first visual contact between users. Our proposal is a complete self-configured secure protocol
that is able to create the network and share secure services without any infrastructure. The
network allows sharing resources and offering new services among users in a secure
environment. The protocol includes all functions needed to operate without any external support.
We have designed and developed it for devices with limited resources. Network creation stages
are detailed and the communication, protocol messages, and network management are explained.
Our proposal has been implemented in order to test the protocol procedure and performance.
Finally, we compare the protocol with other spontaneous ad hoc network protocols in order to
highlight its features and we provide a security analysis of the system.
13. Dynamic Resource Allocation Using Virtual Machines for Cloud
Computing Environment
Abstract :
Cloud computing allows business customers to scale up and down their resource usage based
on needs. Many of the touted gains in the cloud model come from resource multiplexing through
virtualization technology. In this paper, we present a system that uses virtualization technology
to allocate data center resources dynamically based on application demands and support green
computing by optimizing the number of servers in use. We introduce the concept of "skewness"
to measure the unevenness in the multidimensional resource utilization of a server. By
minimizing skewness, we can combine different types of workloads nicely and improve the
overall utilization of server resources. We develop a set of heuristics that prevent overload in the
system effectively while saving energy used. Trace driven simulation and experiment results
demonstrate that our algorithm achieves good performance.
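As a rough illustration, one plausible formulation of such a skewness metric (an assumption consistent with the abstract, not necessarily the paper's exact definition) compares each resource's utilization to the server's average utilization:

```python
from math import sqrt

def skewness(utilizations):
    """Unevenness of a server's multidimensional resource usage.

    utilizations: per-resource usage fractions, e.g. [cpu, memory, io].
    Returns 0.0 when all resources are equally used; grows as the
    usage profile becomes more lopsided.
    """
    avg = sum(utilizations) / len(utilizations)
    return sqrt(sum((u / avg - 1.0) ** 2 for u in utilizations))

print(skewness([0.5, 0.5, 0.5]))  # 0.0 -> perfectly balanced
print(skewness([0.9, 0.1, 0.5]))  # > 0 -> skewed usage profile
```

A placement heuristic would then prefer assigning a workload to the server whose skewness decreases (or grows least) after the assignment, which tends to pair complementary workloads.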
14. High Performance Resource Allocation Strategies for Computational
Economies
Abstract :
Utility computing models have long been the focus of academic research, and with the recent
success of commercial cloud providers, computation and storage are finally being realized as the
fifth utility. Computational economies are often proposed as an efficient means of resource
allocation; however, adoption has been limited by poor performance and high overheads.
In this paper, we address the performance limitations of existing economic allocation models by
defining strategies to reduce the failure and reallocation rate, increase occupancy and thereby
increase the obtainable utilization of the system. The high-performance resource utilization
strategies presented can be used by market participants without requiring dramatic changes to the
allocation protocol. The strategies considered include overbooking, advanced reservation, just-
in-time bidding, and using substitute providers for service delivery. The proposed strategies have
been implemented in a distributed metascheduler and evaluated with respect to Grid and cloud
deployments. Several diverse synthetic workloads have been used to quantify both the
performance benefits and economic implications of these strategies.
15. A Privacy Leakage Upper Bound Constraint-Based Approach for Cost-
Effective Privacy Preserving of Intermediate Data Sets in Cloud
Abstract :
Cloud computing provides massive computation power and storage capacity which enable
users to deploy computation and data-intensive applications without infrastructure investment.
During the processing of such applications, a large volume of intermediate data sets will be
generated, and often stored to save the cost of recomputing them. However, preserving the
privacy of intermediate data sets becomes a challenging problem because adversaries may
recover privacy-sensitive information by analyzing multiple intermediate data sets. Encrypting
all data sets in the cloud is widely adopted in existing approaches to address this challenge. However,
we argue that encrypting all intermediate data sets is neither efficient nor cost-effective, because
it is very time-consuming and costly for data-intensive applications to encrypt/decrypt data sets
frequently while performing any operation on them. In this paper, we propose a novel upper
bound privacy leakage constraint-based approach to identify which intermediate data sets need to
be encrypted and which do not, so that privacy-preserving cost can be saved while the privacy
requirements of data holders can still be satisfied. Evaluation results demonstrate that the
privacy-preserving cost of intermediate data sets can be significantly reduced with our approach
over existing ones where all data sets are encrypted.
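The core idea, encrypting only the data sets needed to keep total leakage under a bound, can be sketched as follows. The greedy rule and the additive leakage model are simplifying assumptions made here; the paper's constraint-based approach is more sophisticated.

```python
def select_to_encrypt(leakage, bound):
    """Pick which intermediate data sets to encrypt (illustrative sketch).

    leakage: {dataset_name: estimated privacy leakage if left in plaintext}
    bound:   upper bound on the total leakage of unencrypted data sets.
    Greedily encrypts the leakiest data sets until the bound is satisfied.
    """
    encrypted = set()
    remaining = dict(leakage)
    while remaining and sum(remaining.values()) > bound:
        worst = max(remaining, key=remaining.get)  # leakiest plaintext set
        encrypted.add(worst)
        del remaining[worst]
    return encrypted

# Only d2 needs encryption to keep total plaintext leakage <= 0.5
print(select_to_encrypt({"d1": 0.2, "d2": 0.6, "d3": 0.1}, bound=0.5))  # {'d2'}
```

Compared with encrypting everything, only the data sets that actually threaten the bound incur encryption and decryption cost.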
16. A Secure Payment Scheme with Low Communication and Processing
Overhead for Multihop Wireless Networks
Abstract :
We propose RACE, a report-based payment scheme for multihop wireless networks to
stimulate node cooperation, regulate packet transmission, and enforce fairness. The nodes submit
lightweight payment reports (instead of receipts) to the accounting center (AC) and temporarily
store undeniable security tokens called Evidences. The reports contain the alleged charges and
rewards without security proofs, e.g., signatures. The AC can verify the payment by investigating
the consistency of the reports, and clear the payment of the fair reports with almost no processing
overhead or cryptographic operations. For cheating reports, the Evidences are requested to
identify and evict the cheating nodes that submit incorrect reports. Instead of requesting the
Evidences from all the nodes participating in the cheating reports, RACE can identify the
cheating nodes by requesting only a few Evidences. Moreover, an Evidence aggregation technique is
used to reduce the Evidences’ storage area. Our analytical and simulation results demonstrate
that RACE requires much less communication and processing overhead than the existing receipt-
based schemes with acceptable payment clearance delay and storage area. This is essential for
the effective implementation of a payment scheme because it uses micropayment and the
overhead cost should be much less than the payment value. Moreover, RACE can secure the
payment and precisely identify the cheating nodes without false accusations.
17. Mobi-Sync: Efficient Time Synchronization for Mobile Underwater Sensor
Networks
Abstract :
Time synchronization is an important requirement for many services provided by distributed
networks. Many time synchronization protocols have been proposed for terrestrial Wireless
Sensor Networks (WSNs). However, none of them can be directly applied to Underwater Sensor
Networks (UWSNs). A synchronization algorithm for UWSNs must consider additional factors
such as long propagation delays from the use of acoustic communication and sensor node
mobility. These unique challenges make the accuracy of synchronization procedures for UWSNs
even more critical. Time synchronization solutions specifically designed for UWSNs are needed
to satisfy these new requirements. This paper proposes Mobi-Sync, a novel time synchronization
scheme for mobile underwater sensor networks. Mobi-Sync distinguishes itself from previous
approaches for terrestrial WSN by considering spatial correlation among the mobility patterns of
neighboring UWSN nodes. This enables Mobi-Sync to accurately estimate the long dynamic
propagation delays. Simulation results show that Mobi-Sync outperforms existing schemes in
both accuracy and energy efficiency.
18. Detection and Localization of Multiple Spoofing Attackers in Wireless
Networks
Abstract :
Wireless spoofing attacks are easy to launch and can significantly impact the performance of
networks. Although the identity of a node can be verified through cryptographic authentication,
conventional security approaches are not always desirable because of their overhead
requirements. In this paper, we propose to use spatial information, a physical property associated
with each node, hard to falsify, and not reliant on cryptography, as the basis for 1) detecting
spoofing attacks; 2) determining the number of attackers when multiple adversaries
masquerade as the same node identity; and 3) localizing multiple adversaries. We propose to
use the spatial correlation of received signal strength (RSS) inherited from wireless nodes to
detect the spoofing attacks. We then formulate the problem of determining the number of
attackers as a multiclass detection problem. Cluster-based mechanisms are developed to
determine the number of attackers. When the training data are available, we explore using the
Support Vector Machines (SVM) method to further improve the accuracy of determining the
number of attackers. In addition, we developed an integrated detection and localization system
that can localize the positions of multiple attackers. We evaluated our techniques through two
testbeds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real
office buildings. Our experimental results show that our proposed methods can achieve over 90
percent Hit Rate and Precision when determining the number of attackers. Our localization
results using a representative set of algorithms provide strong evidence of high accuracy of
localizing multiple adversaries.
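The spatial-correlation intuition behind RSS-based detection can be illustrated with a toy test. The specific rule (flag spoofing when two readings for one claimed identity are far apart in RSS space), the threshold, and the sample readings are assumptions for demonstration; the paper uses cluster analysis of the readings.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def detect_spoofing(rss_readings, threshold=10.0):
    """Flag a claimed node identity as possibly spoofed (toy sketch).

    rss_readings: RSS vectors (one value per monitoring landmark) observed
    for a single claimed identity. Readings from one physical location
    cluster tightly, while two attackers at distinct spots produce two
    well-separated clusters, so a large maximum pairwise distance in
    RSS space suggests spoofing.
    """
    gap = max(dist(a, b) for a in rss_readings for b in rss_readings)
    return gap > threshold

one_spot = [(-60, -70), (-61, -69), (-59, -71)]    # one physical sender
two_spots = one_spot + [(-40, -90), (-41, -89)]    # a second location
print(detect_spoofing(one_spot))   # False
print(detect_spoofing(two_spots))  # True
```

Counting the well-separated clusters, rather than just testing the gap, is what generalizes this check to estimating the number of attackers.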
KNOWLEDGE AND DATA ENGINEERING
1. Crowdsourced Trace Similarity with Smartphones
Abstract :
Smartphones are nowadays equipped with a number of sensors, such as WiFi, GPS,
accelerometers, etc. This capability allows smartphone users to easily engage in crowdsourced
computing services, which contribute to the solution of complex problems in a distributed
manner. In this work, we leverage such a computing paradigm to solve efficiently the following
problem: comparing a query trace Q against a crowd of traces generated and stored on
distributed smartphones. Our proposed framework, coined SmartTrace+, provides an effective
solution without disclosing any part of the crowd traces to the query processor. SmartTrace+
relies on an in-situ data storage model and intelligent top-K query processing algorithms that
exploit distributed trajectory similarity measures, resilient to spatial and temporal noise, in order
to derive the most relevant answers to Q. We evaluate our algorithms on both synthetic and real
workloads. We describe our prototype system developed on the Android OS. The solution is
deployed over our own SmartLab testbed of 25 smartphones. Our study reveals that
computations over SmartTrace+ result in substantial energy conservation; in addition, results can
be computed faster than competitive approaches.
2. Incentive Compatible Privacy-Preserving Data Analysis
Abstract :
In many cases, competing parties who have private data may collaboratively conduct
privacy-preserving distributed data analysis (PPDA) tasks to learn beneficial data models or
analysis results. Most often, the competing parties have different incentives. Although certain
PPDA techniques guarantee that nothing other than the final analysis result is revealed, it is
impossible to verify whether participating parties are truthful about their private input data.
Unless proper incentives are set, current PPDA techniques cannot prevent participating parties
from modifying their private inputs. This raises the question of how to design incentive
compatible privacy-preserving data analysis techniques that motivate participating parties to
provide truthful inputs. In this paper, we first develop key theorems; then, based on these
theorems, we analyze certain important privacy-preserving data analysis tasks that can be
conducted in a way that makes telling the truth the best choice for any participating party.
3. On Identifying Critical Nuggets of Information during Classification Tasks
Abstract :
In large databases, there may exist critical nuggets—small collections of records or instances
that contain domain-specific important information. This information can be used for future
decision making such as labeling of critical, unlabeled data records and improving classification
results by reducing false positive and false negative errors. This work introduces the idea of
critical nuggets, proposes an innovative domain-independent method to measure criticality,
suggests a heuristic to reduce the search space for finding critical nuggets, and isolates and
validates critical nuggets from some real-world data sets. It appears that only a few subsets may
qualify as critical nuggets, underscoring the importance of finding them. The proposed
methodology can detect them. This work also identifies certain properties of critical nuggets and
provides experimental validation of the properties. Experimental results also helped validate that
critical nuggets can assist in improving classification accuracies in real-world data sets.
4. Failure-Aware Cascaded Suppression in Wireless Sensor Networks
Abstract :
Wireless sensor networks are widely used to continuously collect data from the environment.
Because of energy constraints on battery-powered nodes, it is critical to minimize
communication. Suppression has been proposed as a way to reduce communication by using
predictive models to suppress reporting of predictable data. However, in the presence of
communication failures, missing data are difficult to interpret because these could have been
either suppressed or lost in transmission. There is no existing solution for handling failures for
general, spatiotemporal suppression that uses cascading. While cascading further reduces
communication, it makes failure handling difficult, because nodes can act on incomplete or
incorrect information and in turn affect other nodes. We propose a cascaded suppression
framework that exploits both temporal and spatial data correlation to reduce communication, and
applies coding theory and Bayesian inference to recover missing data resulting from suppression
and communication failures. Experiment results show that cascaded suppression significantly
reduces communication cost and improves missing data recovery compared to existing
approaches.
5. Optimal Route Queries with Arbitrary Order Constraints
Abstract :
Given a set of spatial points DS, each of which is associated with categorical information,
e.g., restaurant, pub, etc., the optimal route query finds the shortest path that starts from the
query point (e.g., a home or hotel), and covers a user-specified set of categories (e.g., {pub,
restaurant, museum}). The user may also specify partial order constraints between different
categories, e.g., a restaurant must be visited before a pub. Previous work has focused on a special
case where the query contains the total order of all categories to be visited (e.g., museum →
restaurant → pub). For the general scenario without such a total order, the only known solution
reduces the problem to multiple, total-order optimal route queries. As we show in this paper, this
naïve approach incurs a significant amount of repeated computation and, thus, is not scalable
to large data sets. Motivated by this, we propose novel solutions to the general optimal route
query, based on two different methodologies, namely backward search and forward search. In
addition, we discuss how the proposed methods can be adapted to answer a variant of the optimal
route queries, in which the route only needs to cover a subset of the given categories. Extensive
experiments, using both real and synthetic data sets, confirm that the proposed solutions are
efficient and practical, and outperform existing methods by large margins.
6. Co-Occurrence-Based Diffusion for Expert Search on the Web
Abstract :
Expert search has been studied in different contexts, e.g., enterprises, academic communities.
We examine a general expert search problem: searching experts on the web, where millions of
webpages and thousands of names are considered. It raises two main challenges: 1)
webpages can be of varying quality and full of noise; 2) the expertise evidence scattered in
webpages is usually vague and ambiguous. We propose to leverage the large amount of
co-occurrence information to assess relevance and reputation of a person name for a query topic.
The co-occurrence structure is modeled using a hypergraph, on which a heat diffusion based
ranking algorithm is proposed. Query keywords are regarded as heat sources, and a person name
that has a strong connection with the query (i.e., frequently co-occurs with query keywords and
co-occurs with other names related to query keywords) will receive most of the heat, thus being
ranked high. Experiments on the ClueWeb09 web collection show that our algorithm is effective
for retrieving experts and outperforms baseline algorithms significantly. This work would be
regarded as one step toward addressing the more general entity search problem without
sophisticated NLP techniques.
7. Clustering Uncertain Data Based on Probability Distribution Similarity
Abstract :
Clustering on uncertain data, one of the essential tasks in mining uncertain data, poses
significant challenges on both modeling similarity between uncertain objects and developing
efficient computational methods. The previous methods extend traditional partitioning clustering
methods like k-means and density-based clustering methods like DBSCAN to uncertain data,
and thus rely on geometric distances between objects. Such methods cannot handle uncertain objects
that are geometrically indistinguishable, such as products with the same mean but very different
variances in customer ratings. Surprisingly, probability distributions, which are essential
characteristics of uncertain objects, have not been considered in measuring similarity between
uncertain objects. In this paper, we systematically model uncertain objects in both continuous
and discrete domains, where an uncertain object is modeled as a continuous and discrete random
variable, respectively. We use the well-known Kullback-Leibler divergence to measure similarity
between uncertain objects in both the continuous and discrete cases, and integrate it into
partitioning and density-based clustering methods to cluster uncertain objects. Nevertheless, a
naïve implementation is very costly. In particular, computing the exact KL divergence in the
continuous case is very costly or even infeasible. To tackle the problem, we estimate KL
divergence in the continuous case by kernel density estimation and employ the fast Gauss
transform technique to further speed up the computation. Our extensive experiment results verify
the effectiveness, efficiency, and scalability of our approaches.
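The discrete case of the abstract's core measure is easy to make concrete. The sketch below computes the KL divergence between discrete distributions and uses it as the assignment step of a partitioning clustering, in the spirit of the abstract; the toy rating distributions are assumptions made here for illustration.

```python
from math import log

def kl_divergence(p, q):
    """D(P || Q) between two discrete distributions over the same support.

    Note: asymmetric, and undefined where q has zero mass under p's support.
    """
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def assign_cluster(obj, centroids):
    """Assign an uncertain object (a rating distribution) to the centroid
    with the smallest KL divergence -- a KL-based partitioning step."""
    return min(range(len(centroids)),
               key=lambda i: kl_divergence(obj, centroids[i]))

# Two products with the same mean rating but different variances are
# geometrically indistinguishable yet distributionally distinct:
low_var  = [0.10, 0.20, 0.40, 0.20, 0.10]
high_var = [0.35, 0.10, 0.10, 0.10, 0.35]
print(assign_cluster([0.10, 0.25, 0.35, 0.20, 0.10], [low_var, high_var]))  # 0
```

The example object shares the low-variance shape, so KL assigns it to the first centroid even though both centroids have the same mean.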
8. PMSE: A Personalized Mobile Search Engine
Abstract :
We propose a personalized mobile search engine (PMSE) that captures the users' preferences
in the form of concepts by mining their clickthrough data. Due to the importance of location
information in mobile search, PMSE classifies these concepts into content concepts and location
concepts. In addition, users' locations (positioned by GPS) are used to supplement the location
concepts in PMSE. The user preferences are organized in an ontology-based, multifacet user
profile, which is used to adapt a personalized ranking function for the rank adaptation of future
search results. To characterize the diversity of the concepts associated with a query and their
relevances to the user's need, four entropies are introduced to balance the weights between the
content and location facets. Based on the client-server model, we also present a detailed
architecture and design for implementation of PMSE. In our design, the client collects and stores
locally the clickthrough data to protect privacy, whereas heavy tasks such as concept extraction,
training, and reranking are performed at the PMSE server. Moreover, we address the privacy
issue by restricting the information in the user profile exposed to the PMSE server with two
privacy parameters. We prototype PMSE on the Google Android platform. Experimental results
show that PMSE significantly improves precision compared to the baseline.
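The entropy-based facet weighting can be illustrated with one of the entropies over click data. This sketch shows a single Shannon entropy and a hypothetical weighting rule (prefer the more focused facet); the paper's four-entropy balance is richer, and the click counts here are invented.

```python
from math import log2

def click_entropy(click_counts):
    """Shannon entropy of a user's clicks over the concepts of one facet.

    Low entropy -> clicks concentrated on few concepts (focused,
    discriminative interest); high entropy -> diffuse interest.
    """
    total = sum(click_counts)
    return -sum((c / total) * log2(c / total) for c in click_counts if c > 0)

# Hypothetical weighting rule: the facet whose concept clicks are more
# focused (lower entropy) gets more weight in the personalized ranking.
content_h = click_entropy([8, 1, 1])     # focused content interest
location_h = click_entropy([3, 3, 4])    # diffuse location interest
print(content_h < location_h)  # True -> weight the content facet more
```

For the query in this example, content concepts discriminate the user's intent better than location concepts, so the ranking function would lean on the content facet.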
9. Discovering Temporal Change Patterns in the Presence of Taxonomies
Abstract :
Frequent itemset mining is a widely used exploratory technique that focuses on discovering
recurrent correlations among data. The steadfast evolution of markets and business environments
prompts the need of data mining algorithms to discover significant correlation changes in order
to reactively suit product and service provision to customer needs. Change mining, in the context
of frequent itemsets, focuses on detecting and reporting significant changes in the set of mined
itemsets from one time period to another. The discovery of frequent generalized itemsets, i.e.,
itemsets that 1) frequently occur in the source data, and 2) provide a high-level abstraction of the
mined knowledge, raises new challenges in the analysis of itemsets that become rare, and thus
are no longer extracted, from a certain point onward. This paper proposes a novel kind of dynamic
pattern, namely the HIstory GENeralized Pattern (HIGEN), that represents the evolution of an
itemset in consecutive time periods, by reporting the information about its frequent
generalizations characterized by minimal redundancy (i.e., minimum level of abstraction) in case
it becomes infrequent in a certain time period. To address HIGEN mining, the paper proposes HIGEN
MINER, an algorithm that avoids itemset mining followed by postprocessing by
exploiting a support-driven itemset generalization approach. To focus attention on the
minimally redundant frequent generalizations and thus reduce the amount of the generated
patterns, the discovery of a smart subset of HIGENs, namely the NONREDUNDANT HIGENs,
is addressed as well. Experiments performed on both real and synthetic datasets show the
efficiency and the effectiveness of the proposed approach as well as its usefulness in a real
application context.
10. Spatial Approximate String Search
Abstract :
This work deals with the approximate string search in large spatial databases. Specifically,
we investigate range queries augmented with a string similarity search predicate in both
Euclidean space and road networks. We dub this query the spatial approximate string (SAS)
query. In Euclidean space, we propose an approximate solution, the MHR-tree, which embeds
min-wise signatures into an R-tree. The min-wise signature for an index node u keeps a concise
representation of the union of q-grams from strings under the subtree of u. We analyze the
pruning functionality of such signatures based on the set resemblance between the query string
and the q-grams from the subtrees of index nodes. We also discuss how to estimate the
selectivity of a SAS query in Euclidean space, for which we present a novel adaptive algorithm to
find balanced partitions using both the spatial and string information stored in the tree. For
queries on road networks, we propose a novel exact method, RSASSOL, which significantly
outperforms the baseline algorithm in practice. RSASSOL combines q-gram-based
inverted lists with reference-node-based pruning. Extensive experiments on large real data
sets demonstrate the efficiency and effectiveness of our approaches.
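The min-wise signature idea behind the MHR-tree can be sketched directly: hash a string's q-gram set with k seeded hash functions and keep each minimum; agreeing slots between two signatures estimate the Jaccard set resemblance used for pruning. Using md5 with per-slot seeds as a stand-in for k independent hash functions, and k=16, are assumptions made here for illustration.

```python
import hashlib

def qgrams(s, q=2):
    """Set of overlapping q-grams of a string."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def minwise_signature(grams, k=16):
    """k min-hash values of a q-gram set (seeded md5 is an illustrative
    stand-in for k independent hash functions)."""
    return [min(int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16)
                for g in grams)
            for seed in range(k)]

def estimated_resemblance(sig_a, sig_b):
    """Fraction of matching slots estimates the Jaccard resemblance of
    the two underlying q-gram sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

sig_theatre = minwise_signature(qgrams("theatre"))
sig_theater = minwise_signature(qgrams("theater"))
sig_harbour = minwise_signature(qgrams("harbour"))
# Similar strings share many q-grams, so their signatures agree on many
# slots; strings with disjoint q-gram sets agree on none.
print(estimated_resemblance(sig_theatre, sig_theater))
print(estimated_resemblance(sig_theatre, sig_harbour))  # 0.0
```

An index node stores one such signature summarizing all strings in its subtree, so a query whose estimated resemblance is low can prune the whole subtree without visiting it.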
11. Robust Module-Based Data Management
Abstract :
The current trend for building an ontology-based data management system (DMS) is to
capitalize on efforts made to design a preexisting well-established DMS (a reference system).
The method amounts to extracting from the reference DMS a piece of schema relevant to the
new application needs (a module), possibly personalizing it with extra constraints w.r.t. the
application under construction, and then managing a data set using the resulting schema. In this
paper, we extend the existing definitions of modules and we introduce novel properties of
robustness that provide means for checking easily that a robust module-based DMS evolves
safely w.r.t. both the schema and the data of the reference DMS. We carry out our investigations
in the setting of description logics which underlie modern ontology languages, like RDFS, OWL,
and OWL2 from W3C. Notably, we focus on the DL-Lite_A dialect of the DL-Lite family, which
encompasses the foundations of the QL profile of OWL2 (i.e., DL-Lite_R): the W3C
recommendation for efficiently managing large data sets.
12. Protecting Sensitive Labels in Social Network Data Anonymization
Abstract :
Privacy is one of the major concerns when publishing or sharing social network data for
social science research and business analysis. Recently, researchers have developed privacy
models similar to k-anonymity to prevent node reidentification through structure information.
However, even when these privacy models are enforced, an attacker may still be able to infer
one’s private information if a group of nodes largely share the same sensitive labels (i.e.,
attributes). In other words, the label-node relationship is not well protected by pure structure
anonymization methods. Furthermore, existing approaches, which rely on edge editing or node
clustering, may significantly alter key graph properties. In this paper, we define a k-degree-l-
diversity anonymity model that considers the protection of structural information as well as
sensitive labels of individuals. We further propose a novel anonymization methodology based on
adding noise nodes. We develop a new algorithm by adding noise nodes into the original graph
with the consideration of introducing the least distortion to graph properties. Most importantly,
we provide a rigorous analysis of the theoretical bounds on the number of noise nodes added and
their impacts on an important graph property. We conduct extensive experiments to evaluate the
effectiveness of the proposed technique.
13. A Proxy-Based Approach to Continuous Location-Based Spatial Queries in
Mobile Environments
Abstract :
Caching valid regions of spatial queries at mobile clients is effective in reducing the number
of queries submitted by mobile clients and query load on the server. However, mobile clients
suffer from longer waiting time for the server to compute valid regions. We propose in this paper
a proxy-based approach to continuous nearest-neighbor (NN) and window queries. The proxy
creates estimated valid regions (EVRs) for mobile clients by exploiting spatial and temporal
locality of spatial queries. For NN queries, we devise two new algorithms to accelerate EVR
growth, leading the proxy to build effective EVRs even when the cache size is small. On the
other hand, we propose to represent the EVRs of window queries in the form of vectors, called
estimated window vectors (EWVs), to achieve larger estimated valid regions. This novel
representation and the associated creation algorithm result in more effective EVRs of window
queries. In addition, due to their distinct characteristics, we use separate index structures, namely
EVR-tree and grid index, for NN queries and window queries, respectively. To further increase
efficiency, we develop algorithms to exploit the results of NN queries to aid grid index growth,
benefiting EWV creation of window queries. Similarly, the grid index is utilized to support NN
query answering and EVR updating. We conduct several experiments for performance
evaluation. The experimental results show that the proposed approach significantly outperforms
the existing proxy-based approaches.
14. A Fast Clustering-Based Feature Subset Selection Algorithm for High-
Dimensional Data
Abstract :
Feature selection involves identifying a subset of the most useful features that produces
results comparable to those of the original entire set of features. A feature selection algorithm may be
evaluated from both the efficiency and effectiveness points of view. While the efficiency
concerns the time required to find a subset of features, the effectiveness is related to the
quality of the subset of features. Based on these criteria, a fast clustering-based feature
selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The FAST
algorithm works in two steps. In the first step, features are divided into clusters by using
graph-theoretic clustering methods. In the second step, the most representative feature that is
strongly related to target classes is selected from each cluster to form a subset of features.
Because features in different clusters are relatively independent, the clustering-based strategy of FAST
has a high probability of producing a subset of useful and independent features. To ensure the
efficiency of FAST, we adopt the efficient minimum-spanning tree (MST) clustering method.
The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical
study. Extensive experiments are carried out to compare FAST and several representative
feature selection algorithms, namely, FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with
respect to four types of well-known classifiers, namely, the probability-based Naive Bayes, the
tree-based C4.5, the instance-based IB1, and the rule-based RIPPER before and after feature
selection. The results, on 35 publicly available real-world high-dimensional image,
microarray, and text data, demonstrate that FAST not only produces smaller subsets of
features but also improves the performance of the four types of classifiers.
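The graph-theoretic clustering step of an MST-based method like FAST can be sketched generically: build a minimum spanning tree over the feature graph (Kruskal), then cut heavy edges so the surviving connected components form the clusters. The edge weights and cut threshold here are illustrative assumptions; FAST derives its weights and cut criterion from feature-correlation measures.

```python
def mst_clusters(n, edges, cut_threshold):
    """Cluster items via an MST with heavy edges removed (sketch).

    edges: (weight, u, v) tuples over items 0..n-1.
    Runs Kruskal's algorithm but only keeps MST edges whose weight is at
    most cut_threshold; the resulting connected components (tracked with
    a union-find structure) are the clusters.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for w, u, v in sorted(edges):          # Kruskal: lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv and w <= cut_threshold:
            parent[ru] = rv                # keep this light MST edge

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

# Two tight pairs joined by one heavy edge -> cutting it yields 2 clusters
edges = [(0.1, 0, 1), (0.2, 2, 3), (0.9, 1, 2)]
print(mst_clusters(4, edges, cut_threshold=0.5))
```

In FAST's second step, each resulting cluster would then contribute its single most class-relevant feature to the selected subset.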
15. Ranking on Data Manifold with Sink Points
Abstract :
Ranking is an important problem in various applications, such as Information Retrieval (IR),
natural language processing, computational biology, and social sciences. Many ranking
approaches have been proposed to rank objects according to their degrees of relevance or
importance. Beyond these two goals, diversity has also been recognized as a crucial criterion in
ranking. Top-ranked results are expected to convey as little redundant information as possible,
and cover as many aspects as possible. However, existing ranking approaches either take no
account of diversity, or handle it separately with some heuristics. In this paper, we introduce a
novel approach, Manifold Ranking with Sink Points (MRSP), to address diversity as well as
relevance and importance in ranking. Specifically, our approach uses a manifold ranking process
over the data manifold, which can naturally find the most relevant and important data objects.
Meanwhile, by turning ranked objects into sink points on data manifold, we can effectively
prevent redundant objects from receiving a high rank. MRSP not only shows a nice convergence
property, but also has an interesting and satisfying optimization explanation. We applied MRSP
on two application tasks, update summarization and query recommendation, where diversity is of
great concern in ranking. Experimental results on both tasks present a strong empirical
performance of MRSP as compared to existing ranking approaches.
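The core iteration behind this kind of ranking can be sketched as follows. This is a simplified illustration: the row-normalized similarity matrix, the parameter values, and the toy data are assumptions, not the paper's exact formulation.

```python
# Sketch of manifold ranking with sink points: iterate
#   f <- alpha * S f + (1 - alpha) * y
# while clamping sink points (already-selected items) to zero so they
# stop propagating score, which suppresses redundant near-duplicates.

def manifold_rank(S, y, sinks, alpha=0.85, iters=100):
    n = len(y)
    f = y[:]
    for _ in range(iters):
        g = [alpha * sum(S[i][j] * f[j] for j in range(n))
             + (1 - alpha) * y[i] for i in range(n)]
        for s in sinks:              # sink points keep zero score
            g[s] = 0.0
        f = g
    return f

# Toy similarity graph: items 0 and 1 are near-duplicates; 2 is distinct.
S = [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
y = [1.0, 0.8, 0.5]                  # prior relevance to the query

first = manifold_rank(S, y, sinks=set())
top = max(range(3), key=lambda i: first[i])   # select the top item
second = manifold_rank(S, y, sinks={top})     # re-rank with it sunk
```

Once the top item is turned into a sink, its near-duplicate no longer inherits its score, which is how redundancy is penalized in the second ranking pass.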
16. Tweet Analysis for Real-Time Event Detection and Earthquake Reporting
System Development
Abstract :
Twitter has received much attention recently. An important characteristic of Twitter is its
real-time nature. We investigate the real-time interaction of events such as earthquakes in Twitter
and propose an algorithm to monitor tweets and to detect a target event. To detect a target event,
we devise a classifier of tweets based on features such as the keywords in a tweet, the number of
words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the
target event that can find the center of the event location. We regard each Twitter user as a
sensor and apply particle filtering, which is widely used for location estimation. The particle
filter works better than other comparable methods for estimating the locations of target events.
As an application, we develop an earthquake reporting system for use in Japan. Because of the
numerous earthquakes and the large number of Twitter users throughout the country, we can
detect an earthquake with high probability (93 percent of earthquakes of Japan Meteorological
Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our
system detects earthquakes promptly and notification is delivered much faster than JMA
broadcast announcements.
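The "users as sensors" idea can be illustrated with a basic particle filter estimating an event's location from noisy position reports. This is a hedged one-dimensional sketch: the real system works in two dimensions with tweet timestamps, and the Gaussian sensor noise model here is an assumption for illustration.

```python
import math
import random

def particle_filter(reports, n_particles=2000, noise_std=1.0, seed=7):
    """Estimate a 1-D event location from noisy sensor reports."""
    rng = random.Random(seed)
    # Start with particles spread uniformly over the region of interest.
    particles = [rng.uniform(0.0, 100.0) for _ in range(n_particles)]
    for z in reports:
        # Weight each particle by the Gaussian likelihood of the report.
        weights = [math.exp(-((p - z) ** 2) / (2 * noise_std ** 2))
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample proportionally to the weights, then jitter slightly
        # to keep particle diversity.
        particles = [p + rng.gauss(0.0, 0.3)
                     for p in rng.choices(particles, weights=weights,
                                          k=n_particles)]
    return sum(particles) / len(particles)    # posterior-mean estimate

true_center = 42.0
report_rng = random.Random(1)
reports = [true_center + report_rng.gauss(0.0, 1.0) for _ in range(30)]
estimate = particle_filter(reports)
```

With thirty noisy reports the posterior mean lands close to the true center, mirroring how many imprecise tweet locations can jointly pin down an epicenter.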
17. Clustering Sentence-Level Text Using a Novel Fuzzy Relational Clustering
Algorithm
Abstract :
In comparison with hard clustering methods, in which a pattern belongs to a single cluster,
fuzzy clustering algorithms allow patterns to belong to all clusters with differing degrees of
membership. This is important in domains such as sentence clustering, since a sentence is likely
to be related to more than one theme or topic present within a document or set of documents.
However, because most sentence similarity measures do not represent sentences in a common
metric space, conventional fuzzy clustering approaches based on prototypes or mixtures of
Gaussians are generally not applicable to sentence clustering. This paper presents a novel fuzzy
clustering algorithm that operates on relational input data; i.e., data in the form of a square
matrix of pairwise similarities between data objects. The algorithm uses a graph representation of
the data, and operates in an Expectation-Maximization framework in which the graph centrality
of an object in the graph is interpreted as a likelihood. Results of applying the algorithm to
sentence clustering tasks demonstrate that the algorithm is capable of identifying overlapping
clusters of semantically related sentences, and that it is therefore of potential use in a variety of
text mining tasks. We also include results of applying the algorithm to benchmark data sets in
several other domains.
18. Distributed Processing of Probabilistic Top-k Queries in Wireless Sensor
Networks
Abstract :
In this paper, we introduce the notion of sufficient set and necessary set for distributed
processing of probabilistic top-k queries in cluster-based wireless sensor networks. These two
concepts have very nice properties that can facilitate localized data pruning in clusters.
Accordingly, we develop a suite of algorithms, namely, sufficient set-based (SSB), necessary set-
based (NSB), and boundary-based (BB), for intercluster query processing with bounded rounds
of communications. Moreover, in responding to dynamic changes of data distribution in the
network, we develop an adaptive algorithm that dynamically switches among the three proposed
algorithms to minimize the transmission cost. We show the applicability of sufficient set and
necessary set to wireless sensor networks with both two-tier hierarchical and tree-structured
network topologies. Experimental results show that the proposed algorithms reduce data
transmissions significantly and incur only small constant rounds of data communications. The
experimental results also demonstrate the superiority of the adaptive algorithm, which achieves a
near-optimal performance under various conditions.
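The sufficient-set intuition can be sketched for the simpler deterministic case: no reading outside a cluster's local top-k can appear in the global top-k, so each cluster need only forward its local top-k. This is an assumption-laden simplification; the probabilistic top-k setting the paper actually treats is considerably more involved.

```python
# Sketch of localized pruning with sufficient sets (deterministic case):
# each cluster forwards only its local top-k readings, and the base
# station answers the global query from that pruned union.

def local_topk(readings, k):
    """A cluster's "sufficient set": its k largest readings."""
    return sorted(readings, reverse=True)[:k]

def global_topk(clusters, k):
    pruned = [r for c in clusters for r in local_topk(c, k)]
    return sorted(pruned, reverse=True)[:k]

clusters = [[5, 17, 3, 9], [12, 1, 20], [8, 14, 2, 6]]
result = global_topk(clusters, k=3)
```

The pruned answer is guaranteed to match the answer computed over all readings, while each cluster transmits at most k values instead of its whole buffer.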
19. A Survey of XML Tree Patterns
Abstract :
With XML becoming a ubiquitous language for data interoperability purposes in various
domains, efficiently querying XML data is a critical issue. This has led to the design of
algebraic frameworks based on tree-shaped patterns akin to the tree-structured data model of
XML. Tree patterns are graphic representations of queries over data trees. They are actually
matched against an input data tree to answer a query. Since the turn of the 21st century, an
astounding research effort has been focusing on tree pattern models and matching optimization
(a primordial issue). This paper is a comprehensive survey of these topics, in which we outline
and compare the various features of tree patterns. We also review and discuss the two main
families of approaches for optimizing tree pattern matching, namely pattern tree minimization
and holistic matching. We finally present actual tree pattern-based developments, to provide a
global overview of this significant research topic.
20. Automatic Semantic Content Extraction in Videos Using a Fuzzy Ontology
and Rule-Based Model
Abstract :
The recent increase in the use of video-based applications has revealed the need for extracting
the content in videos. Raw data and low-level features alone are not sufficient to fulfill the
user's needs; that is, a deeper understanding of the content at the semantic level is required.
Currently, manual techniques, which are inefficient, subjective, time-consuming, and limit
querying capabilities, are being used to bridge the gap between low-level representative features
and high-level semantic content. Here, we propose a semantic content extraction system that
allows the user to query and retrieve objects, events, and concepts that are extracted
automatically. We introduce an ontology-based fuzzy video semantic content model that uses
spatial/temporal relations in event and concept definitions. This metaontology definition
provides a wide-domain applicable rule construction standard that allows the user to construct an
ontology for a given domain. In addition to domain ontologies, we use additional rule definitions
(without using ontology) to lower spatial relation computation cost and to be able to define some
complex situations more effectively. The proposed framework has been fully implemented and
tested on three different domains. We have obtained satisfactory precision and recall rates for
object, event and concept extraction.
NETWORK AND SECURITY
1. A Distributed Control Law for Load Balancing in Content Delivery
Networks
Abstract :
In this paper, we face the challenging issue of defining and implementing an effective law for
load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal study
of a CDN system, carried out through the exploitation of a fluid flow model characterization of
the network of servers. Starting from such characterization, we derive and prove a lemma about
the network queues equilibrium. This result is then leveraged in order to devise a novel
distributed and time-continuous algorithm for load balancing, which is also reformulated in a
time-discrete version. The discrete formulation of the proposed balancing law is eventually
discussed in terms of its actual implementation in a real-world scenario. Finally, the overall
approach is validated by means of simulations.
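A discrete-time balancing law in this spirit can be sketched as a diffusion over the server graph: each server sheds a fraction of its queue difference toward less-loaded neighbors, and the queues settle at the common equilibrium. The gain, the ring topology, and the update rule below are illustrative assumptions, not the paper's actual control law.

```python
# Sketch of a distributed, discrete-time load-balancing step: each
# server moves a fraction eps of the queue difference toward each
# less-loaded neighbor. Total load is conserved; queues equalize.

def balance_step(queues, neighbors, eps=0.1):
    new = queues[:]
    for i in range(len(queues)):
        for j in neighbors[i]:
            diff = queues[i] - queues[j]
            if diff > 0:                  # move load only downhill
                new[i] -= eps * diff
                new[j] += eps * diff
    return new

# Four CDN servers on a ring with very uneven initial load.
queues = [100.0, 0.0, 50.0, 10.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(200):
    queues = balance_step(queues, neighbors)
```

Each step is computed from purely local information (a server's own queue and its neighbors'), which is the sense in which such a law is distributed.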
2. A Low-Complexity Congestion Control and Scheduling Algorithm for
Multihop Wireless Networks With Order-Optimal Per-Flow Delay
Abstract :
Quantifying the end-to-end delay performance in multihop wireless networks is a well-
known challenging problem. In this paper, we propose a new joint congestion control and
scheduling algorithm for multihop wireless networks with fixed-route flows operated under a
general interference model with interference degree . Our proposed algorithm not only achieves a
provable throughput guarantee (which is close to at least of the system capacity region), but also
leads to explicit upper bounds on the end-to-end delay of every flow. Our end-to-end delay and
throughput bounds are in simple and closed forms, and they explicitly quantify the tradeoff
between throughput and delay of every flow. Furthermore, the per-flow end-to-end delay bound
increases linearly with the number of hops that the flow passes through, which is order-optimal
with respect to the number of hops. Unlike traditional solutions based on the back-pressure
algorithm, our proposed algorithm combines window-based flow control with a new rate-based
distributed scheduling algorithm. A key contribution of our work is to use a novel stochastic
dominance approach to bound the corresponding per-flow throughput and delay, which
otherwise are often intractable in these types of systems. Our proposed algorithm is fully
distributed and requires a low per-node complexity that does not increase with the network size.
Hence, it can be easily implemented in practice.
3. A Utility Maximization Framework for Fair and Efficient Multicasting in
Multicarrier Wireless Cellular Networks
Abstract :
Multicast/broadcast is regarded as an efficient technique for wireless cellular networks to
transmit a large volume of common data to multiple mobile users simultaneously. To guarantee
the quality of service for each mobile user in such single-hop multicasting, the base-station
transmitter usually adapts its data rate to the worst channel condition among all users in a
multicast group. On one hand, increasing the number of users in a multicast group leads to a
more efficient utilization of spectrum bandwidth, as users in the same group can be served
together. On the other hand, too many users in a group may lead to unacceptably low data rate at
which the base station can transmit. Hence, a natural question that arises is how to efficiently and
fairly transmit to a large number of users requiring the same message. This paper endeavors to
answer this question by studying the problem of multicasting over multicarriers in wireless
orthogonal frequency division multiplexing (OFDM) cellular systems. Using a unified utility
maximization framework, we investigate this problem in two typical scenarios: namely, when
users experience roughly equal path losses and when they experience different path losses,
respectively. Through theoretical analysis, we obtain optimal multicast schemes satisfying
various throughput-fairness requirements in these two cases. In particular, we show that the
conventional multicast scheme is optimal in the equal-path-loss case regardless of the utility
function adopted. When users experience different path losses, the group multicast scheme,
which divides the users almost equally into many multicast groups and multicasts to different
groups of users over nonoverlapping subcarriers, is optimal.
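The tradeoff at the heart of this problem can be shown with a small worked example: a single multicast group is capped by its worst user, while splitting users into groups over separate subcarriers can raise total throughput. The numbers and the even spectrum split below are illustrative assumptions, not results from the paper.

```python
# Toy throughput comparison: one multicast group at the worst user's
# rate versus two groups on nonoverlapping subcarriers (each group gets
# half the spectrum, hence half the per-group rate).

def multicast_throughput(user_rates):
    # Conventional multicast: everyone served at the worst user's rate.
    return min(user_rates) * len(user_rates)

def split_throughput(user_rates, n_groups):
    rates = sorted(user_rates)
    size = len(rates) // n_groups
    groups = [rates[i * size:(i + 1) * size] for i in range(n_groups)]
    # Each group's rate is its own worst user, scaled by its spectrum share.
    return sum(min(g) * len(g) / n_groups for g in groups)

users = [1.0, 1.2, 8.0, 9.0]          # two near users, two far users
one_group = multicast_throughput(users)
two_groups = split_throughput(users, 2)
```

Here grouping more than doubles total throughput because the two strong users are no longer dragged down to the weakest user's rate.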
4. ABC: Adaptive Binary Cuttings for Multidimensional Packet
Classification
Abstract :
Decision tree-based packet classification algorithms are easy to implement and allow the
tradeoff between storage and throughput. However, the memory consumption of these
algorithms remains quite high when high throughput is required. The Adaptive Binary Cuttings
(ABC) algorithm exploits another degree of freedom to make the decision tree adapt to the
geometric distribution of the filters. The three variations of the adaptive cutting procedure
produce a set of different-sized cuts at each decision step, with the goal to balance the
distribution of filters and to reduce the filter duplication effect. The ABC algorithm uses stronger
and more straightforward criteria for decision tree construction. Coupled with an efficient node
encoding scheme, it enables a smaller, shorter, and well-balanced decision tree. The hardware-
oriented implementation of each variation is proposed and evaluated extensively to demonstrate
its scalability and sensitivity to different configurations. The results show that the ABC
algorithm significantly outperforms the other decision tree-based algorithms. It can sustain more
than 10-Gb/s throughput and is the only algorithm among the existing well-known packet
classification algorithms that can compete with TCAMs in terms of storage efficiency.
5. Achieving Efficient Flooding by Utilizing Link Correlation in Wireless
Sensor Networks
Abstract :
Although existing flooding protocols can provide efficient and reliable communication in
wireless sensor networks on some level, further performance improvement has been hampered
by the assumption of link independence, which requires costly acknowledgments (ACKs) from
every receiver. In this paper, we present collective flooding (CF), which exploits the link
correlation to achieve flooding reliability using the concept of collective ACKs. CF requires only
1-hop information at each node, making the design highly distributed and scalable with low
complexity. We evaluate CF extensively in real-world settings, using three different types of
testbeds: a single-hop network with 20 MICAz nodes, a multihop network with 37 nodes, and a
linear outdoor network with 48 nodes along a 326-m-long bridge. System evaluation and
extensive simulation show that CF achieves the same reliability as state-of-the-art solutions
while reducing the total number of packet transmissions and the dissemination delay by
30%–50% and 35%–50%, respectively.
6. An Empirical Interference Modeling for Link Reliability Assessment in
Wireless Networks
Abstract :
In recent years, it has been widely believed in the community that the link reliability is
strongly related to received signal strength indicator (RSSI) [or signal-to-interference-plus-noise
ratio (SINR)] and external interference makes it unpredictable, which is different from the
previous understanding that there is no tight relationship between the link reliability and RSSI
(or SINR), but multipath fading causes the unpredictability. However, neither view can fully explain
why the unpredictability appears in the link state. In this paper, we unravel the following
questions: 1) What causes frame losses that are directly related to intermediate link states? 2) Is
RSSI or SINR a right criterion to represent the link reliability? 3) Is there a better measure to
assess the link reliability? We first configured a testbed for performing a real measurement study
to identify the causes of frame losses, and observed that link reliability depends on an intraframe
SINR distribution, not a single value of RSSI (or SINR). We also learned that an RSSI value is
not always a good indicator to estimate the link state. We then conducted a further investigation
on the intraframe SINR distribution and the relationship between the SINR and link reliability
with the ns-2 simulator. Based on these results, we finally propose an interference modeling
framework for estimating link states in the presence of wireless interferences. We envision that
the framework can be used for developing link-aware protocols to achieve their optimal
performance in a hostile wireless environment.
7. Back-Pressure-Based Packet-by-Packet Adaptive Routing in
Communication Networks
Abstract :
Back-pressure-based adaptive routing algorithms where each packet is routed along a
possibly different path have been extensively studied in the literature. However, such algorithms
typically result in poor delay performance and involve high implementation complexity. In this
paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure
algorithm. We decouple the routing and scheduling components of the algorithm by designing a
probabilistic routing table that is used to route packets to per-destination queues. The scheduling
decisions in the case of wireless networks are made using counters called shadow queues. The
results are also extended to the case of networks that employ simple forms of network coding. In
that case, our algorithm provides a low-complexity solution to optimally exploit the routing–
coding tradeoff.
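The baseline rule the paper builds upon can be sketched as follows: at each node, the classic back-pressure weight is the backlog differential per destination (commodity), and service goes to the neighbor-destination pair with the largest differential. The paper's contribution replaces this per-packet choice with shadow queues and a probabilistic routing table; the sketch below shows only the underlying rule, with made-up queue values.

```python
# Sketch of the classic back-pressure decision at one node: pick the
# (neighbor, destination) pair with the largest backlog differential.

def backpressure_choice(local_q, neighbor_qs):
    """local_q: dest -> backlog here; neighbor_qs[n]: dest -> backlog at n.
    Returns (neighbor, dest, weight) with the maximum differential."""
    best = None
    for n, nq in neighbor_qs.items():
        for dest, backlog in local_q.items():
            w = backlog - nq.get(dest, 0)   # pressure toward neighbor n
            if best is None or w > best[2]:
                best = (n, dest, w)
    return best

local_q = {"d1": 10, "d2": 4}
neighbor_qs = {"a": {"d1": 7, "d2": 0},
               "b": {"d1": 1, "d2": 9}}
choice = backpressure_choice(local_q, neighbor_qs)
```

Routing toward the steepest backlog gradient is what stabilizes the queues, but it is also why plain back-pressure can take long detours, motivating the paper's delay-aware redesign.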
8. Centralized and Distributed Protocols for Tracker-Based Dynamic Swarm
Management
Abstract :
With BitTorrent, efficient peer upload utilization is achieved by splitting contents into many
small pieces, each of which may be downloaded from different peers within the same swarm.
Unfortunately, piece and bandwidth availability may cause the file-sharing efficiency to degrade
in small swarms with few participating peers. Using extensive measurements, we identified
hundreds of thousands of torrents with several small swarms for which reallocating peers among
swarms and/or modifying the peer behavior could significantly improve the system performance.
Motivated by this observation, we propose a centralized and a distributed protocol for dynamic
swarm management. The centralized protocol (CSM) manages the swarms of peers at minimal
tracker overhead. The distributed protocol (DSM) manages the swarms of peers while ensuring
load fairness among the trackers. Both protocols achieve their performance improvements by
identifying and merging small swarms and allow load sharing for large torrents. Our evaluations
are based on measurement data collected during eight days from over 700 trackers worldwide,
which collectively maintain state information about 2.8 million unique torrents. We find that
CSM and DSM can achieve most of the performance gains of dynamic swarm management.
These gains are estimated to be up to 40% on average for small torrents.
9. Combined Optimal Control of Activation and Transmission in Delay-
Tolerant Networks
Abstract :
Performance of a delay-tolerant network has strong dependence on the nodes participating in
data transportation. Such networks often face several resource constraints especially related to
energy. Energy is consumed not only in data transmission, but also in listening and in several
signaling activities. On one hand, these activities enhance the system's performance; on the
other hand, they consume a significant amount of energy even when they do not involve actual
node transmission. Accordingly, in order to use energy efficiently, one may have to limit not
only the number of transmissions, but also the number of nodes that are active at each time.
Therefore, we study two coupled problems: 1) the activation problem, which determines when a
mobile will turn on in order to receive packets; and 2) the problem of regulating the beaconing.
We derive optimal energy management strategies by formulating the problem as an optimal
control one, which we then explicitly solve. We also validate our findings through extensive
simulations that are based on contact traces.
10. Complexity Analysis and Algorithm Design for Advance Bandwidth
Scheduling in Dedicated Networks
Abstract :
An increasing number of high-performance networks provision dedicated channels through
circuit switching or MPLS/GMPLS techniques to support large data transfer. The link
bandwidths in such networks are typically shared by multiple users through advance reservation,
resulting in varying bandwidth availability in future time. Developing efficient scheduling
algorithms for advance bandwidth reservation has become a critical task to improve the
utilization of network resources and meet the transport requirements of application users. We
consider an exhaustive combination of different path and bandwidth constraints and formulate
four types of advance bandwidth scheduling problems, with the same objective to minimize the
data transfer end time for a given transfer request with a prespecified data size: 1) fixed path with
fixed bandwidth (FPFB); 2) fixed path with variable bandwidth (FPVB); 3) variable path with
fixed bandwidth (VPFB); and 4) variable path with variable bandwidth (VPVB). For VPFB and
VPVB, we further consider two subcases where the path switching delay is negligible or
nonnegligible. We propose an optimal algorithm for each of these scheduling problems except
for FPVB and VPVB with nonnegligible path switching delay, which are proven to be NP-
complete and nonapproximable, and then tackled by heuristics. The performance superiority of
these heuristics is verified by extensive experimental results in a large set of simulated networks
in comparison to optimal and greedy strategies.
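The simplest of the four variants, FPFB, can be sketched directly: on the fixed path with a fixed reservation bandwidth b, scan time slots for the earliest window in which every slot still has at least b free. The slot granularity, units, and data below are illustrative assumptions, not the paper's formulation.

```python
import math

# Sketch of FPFB (fixed path, fixed bandwidth): find the earliest
# start slot whose whole transfer window has at least b free bandwidth.

def earliest_fpfb_start(avail, b, data_size, slot_len=1.0):
    """avail[t]: free bandwidth in slot t. Returns earliest start slot,
    or None if no feasible window exists in the horizon."""
    slots_needed = math.ceil(data_size / (b * slot_len))
    for start in range(len(avail) - slots_needed + 1):
        window = avail[start:start + slots_needed]
        if all(free >= b for free in window):
            return start
    return None

# Free bandwidth per slot on the path; request: b = 4, data size 12.
avail = [2, 5, 4, 3, 6, 6, 6, 4]
start = earliest_fpfb_start(avail, b=4, data_size=12)
```

Because bandwidth availability varies over time due to prior reservations, the earliest feasible window here starts at slot 4, not at the first slot with any free capacity.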
11. Distortion-Aware Scalable Video Streaming to Multinetwork Clients
Abstract :
We consider the problem of scalable video streaming from a server to multinetwork clients
over heterogeneous access networks, with the goal of minimizing the distortion of the received
videos. This problem has numerous applications including: 1) mobile devices connecting to
multiple licensed and ISM bands, and 2) cognitive multiradio devices employing spectrum
bonding. In this paper, we ascertain how to optimally determine which video packets to transmit
over each access network. We present models to capture the network conditions and video
characteristics and develop an integer program for deterministic packet scheduling. Solving the
integer program exactly is typically not computationally tractable, so we develop heuristic
algorithms for deterministic packet scheduling, as well as convex optimization problems for
randomized packet scheduling. We carry out a thorough study of the tradeoff between
performance and computational complexity and propose a convex programming-based algorithm
that yields good performance while being suitable for real-time applications. We conduct
extensive trace-driven simulations to evaluate the proposed algorithms using real network
conditions and scalable video streams. The simulation results show that the proposed convex
programming-based algorithm: 1) outperforms the rate control algorithms defined in the
Datagram Congestion Control Protocol (DCCP) by about 10–15 dB in video quality; 2)
reduces average delivery delay by over 90% compared to DCCP; 3) achieves average video
quality 4.47 and 1.92 dB higher than the two developed heuristics; 4) runs efficiently, up to six
times faster than the best-performing heuristic; and 5) does indeed provide service differentiation
among users.
12. Efficient Algorithms for Neighbor Discovery in Wireless Networks
Abstract :
Neighbor discovery is an important first step in the initialization of a wireless ad hoc
network. In this paper, we design and analyze several algorithms for neighbor discovery in
wireless networks. Starting with a single-hop wireless network of nodes, we propose an ALOHA-
like neighbor discovery algorithm when nodes cannot detect collisions, and an order-optimal
receiver feedback-based algorithm when nodes can detect collisions. Our algorithms neither
require nodes to have a priori estimates of the number of neighbors nor synchronization between
nodes. Our algorithms allow nodes to begin execution at different time instants and to terminate
neighbor discovery upon discovering all their neighbors. We finally show that receiver feedback
can be used to achieve a running time, even when nodes cannot detect collisions. We then
analyze neighbor discovery in a general multihop setting. We establish an upper bound of on the
running time of the ALOHA-like algorithm, where denotes the maximum node degree in the
network and the total number of nodes. We also establish a lower bound of on the running time
of any randomized neighbor discovery algorithm. Our result thus implies that the ALOHA-like
algorithm is at most a factor worse than optimal.
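An ALOHA-like discovery round in a single-hop clique can be simulated in a few lines: in each slot every node transmits with probability 1/n, and a slot succeeds when exactly one node transmits. The transmission probability and stopping rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

# Sketch of slotted ALOHA-like neighbor discovery in a single-hop
# clique: a slot is useful only when exactly one node transmits.

def discover_all(n, max_slots=10000, seed=3):
    rng = random.Random(seed)
    discovered = set()
    slots = 0
    while len(discovered) < n and slots < max_slots:
        slots += 1
        # Each node independently transmits with probability 1/n.
        transmitters = [i for i in range(n) if rng.random() < 1.0 / n]
        if len(transmitters) == 1:   # no collision: announcement heard
            discovered.add(transmitters[0])
    return discovered, slots

discovered, slots = discover_all(20)
```

With transmission probability 1/n, roughly a 1/e fraction of slots are collision-free, so all neighbors are found in a number of slots that grows only slightly faster than n.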
13. Exploring the Design Space of Multichannel Peer-to-Peer Live Video
Streaming Systems
Abstract :
Most of the commercial peer-to-peer (P2P) video streaming deployments support hundreds
of channels and are referred to as multichannel systems. Recent research studies have proposed
specific protocols to improve the streaming quality for all channels by enabling cross-channel
cooperation among multiple channels. In this paper, we focus on the following fundamental
problems in designing cooperating multichannel systems: 1) what are the general characteristics
of existing and potential designs? and 2) under what circumstances should a particular design be
used to achieve the desired streaming quality with the lowest implementation complexity? To
answer the first question, we propose simple models based on linear programming and network-
flow graphs for three general designs, namely Naive Bandwidth allocation Approach (NBA),
Passive Channel-aware bandwidth allocation Approach (PCA), and Active Channel-aware
bandwidth allocation Approach (ACA), which provide insight into understanding the key
characteristics of cross-channel resource sharing. For the second question, we first develop
closed-form results for two-channel systems. Then, we use extensive numerical simulations to
compare the three designs for various peer population distributions, upload bandwidth
distributions, and channel structures. Our analytical and simulation results show that: 1) the NBA
design can rarely achieve the desired streaming quality in general cases; 2) the PCA design can
achieve the same performance as the ACA design in general cases; and 3) the ACA design
should be used for special applications.
14. Fast Transmission to Remote Cooperative Groups: A New Key
Management Paradigm
Abstract :
The problem of efficiently and securely broadcasting to a remote cooperative group occurs in
many newly emerging networks. A major challenge in devising such systems is to overcome the
obstacles of the potentially limited communication from the group to the sender, the
unavailability of a fully trusted key generation center, and the dynamics of the sender. The
existing key management paradigms cannot deal with these challenges effectively. In this paper,
we circumvent these obstacles and close this gap by proposing a novel key management
paradigm. The new paradigm is a hybrid of traditional broadcast encryption and group key
agreement. In such a system, each member maintains a single public/secret key pair. Upon
seeing the public keys of the members, a remote sender can securely broadcast to any intended
subgroup chosen in an ad hoc way. Following this model, we instantiate a scheme that is proven
secure in the standard model. Even if all the nonintended members collude, they cannot extract
any useful information from the transmitted messages. After the public group encryption key is
extracted, both the computation overhead and the communication cost are independent of the
group size. Furthermore, our scheme facilitates simple yet efficient member deletion/addition
and flexible rekeying strategies. Its strong security against collusion, its constant overhead, and
its implementation friendliness without relying on a fully trusted authority render our protocol a
very promising solution to many applications.
15. Geographic Routing in d-Dimensional Spaces With Guaranteed Delivery
and Low Stretch
Abstract :
Almost all geographic routing protocols have been designed for 2-D. We present a novel
geographic routing protocol, named Multihop Delaunay Triangulation (MDT), for 2-D, 3-D, and
higher dimensions with these properties: 1) guaranteed delivery for any connected graph of
nodes and physical links, and 2) low routing stretch from efficient forwarding of packets out of
local minima. The guaranteed delivery property holds for node locations specified by accurate,
inaccurate, or arbitrary coordinates. The MDT protocol suite includes a packet forwarding
protocol together with protocols for nodes to construct and maintain a distributed MDT for
routing. We present the performance of MDT protocols in 3-D and 4-D as well as performance
comparisons of MDT routing versus representative geographic routing protocols for nodes in 2-
D and 3-D. Experimental results show that MDT provides the lowest routing stretch in the
comparisons. Furthermore, MDT protocols are specially designed to handle churn, i.e., dynamic
topology changes due to addition and deletion of nodes and links. Experimental results show that
MDT’s routing success rate is close to 100% during churn, and node states converge quickly to a
correct MDT after churn.
16. ICTCP: Incast Congestion Control for TCP in Data-Center Networks
Abstract :
Transmission Control Protocol (TCP) incast congestion happens in high-bandwidth and low-
latency networks when multiple synchronized servers send data to the same receiver in parallel.
For many important data-center applications such as MapReduce and Search, this many-to-one
traffic pattern is common. Hence, TCP incast congestion may severely degrade their
performance, e.g., by increasing response time. In this paper, we study TCP incast in detail by
focusing on the relationships between TCP throughput, round-trip time (RTT), and receive
window. Unlike previous approaches, which mitigate the impact of TCP incast congestion by
using a fine-grained timeout value, our idea is to design an Incast congestion Control for TCP
(ICTCP) scheme on the receiver side. In particular, our method adjusts the TCP receive window
proactively before packet loss occurs. The implementation and experiments in our testbed
demonstrate that we achieve almost zero timeouts and high goodput for TCP incast.
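The receiver-side idea can be sketched as a simple window controller: compare the measured throughput to the expected throughput rwnd/RTT, shrink the advertised window when the gap is large (a congestion signal), and grow it when the connection is nearly saturating its window. The thresholds, step sizes, and segment-counted window below are illustrative assumptions, not ICTCP's actual constants or control law.

```python
# Sketch of receiver-side window control in the spirit of ICTCP:
# adjust the advertised window from the gap between expected and
# measured throughput, before any packet loss occurs.

MSS = 1  # window counted in segments for simplicity

def adjust_rwnd(rwnd, measured, rtt, grow=0.1, shrink=0.5):
    expected = rwnd / rtt                # throughput if window were full
    gap = (expected - measured) / expected
    if gap > shrink:                     # far below expectation: back off
        return max(2 * MSS, rwnd - MSS)
    if gap < grow:                       # nearly saturated: probe upward
        return rwnd + MSS
    return rwnd                          # in between: hold steady

# A saturated connection grows its window; a congested one shrinks it.
grown = adjust_rwnd(rwnd=10, measured=9.8, rtt=1.0)
shrunk = adjust_rwnd(rwnd=10, measured=3.0, rtt=1.0)
```

Acting on the window before losses occur is what lets such a receiver avoid the timeouts that dominate incast collapse.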
17. Optimal Content Placement for Peer-to-Peer Video-on-Demand Systems
Abstract :
In this paper, we address the problem of content placement in peer-to-peer (P2P) systems,
with the objective of maximizing the utilization of peers’ uplink bandwidth resources. We
consider system performance under a many-user asymptotic. We distinguish two scenarios,
namely "Distributed Server Networks" (DSNs) for which requests are exogenous to the system,
and "Pure P2P Networks" (PP2PNs) for which requests emanate from the peers themselves. For
both scenarios, we consider a loss network model of performance and determine asymptotically
optimal content placement strategies in the case of a limited content catalog. We then turn to an
alternative "large catalog" scaling where the catalog size scales with the peer population. Under
this scaling, we establish that storage space per peer must necessarily grow unboundedly if
bandwidth utilization is to be maximized. Relating the system performance to properties of a
specific random graph model, we then identify a content placement strategy and a request
acceptance policy that jointly maximize bandwidth utilization, provided storage space per peer
grows unboundedly, although arbitrarily slowly, with system size.
18. Peer-Assisted Social Media Streaming with Social Reciprocity
Abstract :
Online video sharing and social networking are cross-pollinating rapidly in today’s Internet:
Online social network users are sharing more and more media contents among each other, while
online video sharing sites are leveraging social connections among users to promote their videos.
An intriguing development as this is, the operational challenge of earlier video sharing systems persists, namely the large server cost required to scale the systems. Peer-to-peer video sharing could be a rescue, but only if the viewers' mutual resource contributions are fully incentivized and efficiently scheduled. Exploiting the unique advantages of a social-network-based video sharing system, we advocate utilizing social reciprocity among peers with social relationships for efficient contribution incentivization and scheduling, so as to enable high-quality video streaming with low server cost. We exploit social reciprocity with two give-and-take ratios at each peer: (1) the peer contribution ratio (PCR), which evaluates the reciprocity level
between a pair of social friends, and (2) system contribution ratio (SCR), which records the give-
and-take level of the user to and from the entire system. We design efficient peer-to-peer
mechanisms for video streaming using the two ratios, where each user optimally decides which other users to seek relay help from and which users to help in relaying video streams, based on combined evaluations of their social relationships and historical reciprocity levels. Our design
achieves effective incentives for resource contribution, load balancing among relay peers, as well
as efficient social-aware resource scheduling. We also discuss practical implementation and
implement our design in a prototype social media sharing system. Our extensive evaluations
based on PlanetLab experiments verify that high-quality large-scale social media sharing can be
achieved with conservative server costs.
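The two ratios and the helper-selection step can be sketched as below; the scoring formula and field names are illustrative assumptions, not the paper's exact optimization.

```python
def pcr(given_to_friend, taken_from_friend):
    """Peer contribution ratio: give-and-take level between two friends."""
    return given_to_friend / max(taken_from_friend, 1)

def scr(given_to_system, taken_from_system):
    """System contribution ratio: give-and-take level with the whole system."""
    return given_to_system / max(taken_from_system, 1)

def select_helpers(candidates, k=2):
    """Rank candidate relay peers by social tie strength weighted by
    historical reciprocity (PCR) and pick the top k to seek help from."""
    scored = sorted(
        candidates,
        key=lambda c: c["social_weight"] * pcr(c["given"], c["taken"]),
        reverse=True)
    return [c["peer"] for c in scored[:k]]
```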
19. Pricing-Based Decentralized Spectrum Access Control in Cognitive Radio
Networks
Abstract :
This paper investigates pricing-based spectrum access control in cognitive radio networks,
where primary users (PUs) sell the temporarily unused spectrum and secondary users (SUs)
compete via random access for such spectrum opportunities. Compared to existing market-based
approaches with centralized scheduling, pricing-based spectrum management with random
access provides a platform for SUs contending for spectrum access and is amenable to
decentralized implementation due to its low complexity. We focus on two market models, one
with a monopoly PU market and the other with a multiple-PU market. For the monopoly PU
market model, we devise decentralized pricing-based spectrum access mechanisms that enable
SUs to contend for channel usage. Specifically, we first consider SUs contending via slotted
Aloha. Since the revenue maximization problem therein is nonconvex, we characterize the
corresponding Pareto-optimal region and obtain a Pareto-optimal solution that maximizes the
SUs’ throughput subject to their budget constraints. To mitigate the spectrum underutilization
due to the "price of contention," we revisit the problem where SUs contend via CSMA, which
results in more efficient spectrum utilization and higher revenue. We then study the tradeoff
between the PU’s utility and its revenue when the PU’s salable spectrum is controllable. Next,
for the multiple-PU market model, we cast the competition among PUs as a three-stage
Stackelberg game, where each SU selects a PU’s channel to maximize its throughput. We
explore the existence and the uniqueness of Nash equilibrium, in terms of access prices and the
spectrum offered to SUs, and develop an iterative algorithm for strategy adaptation to achieve
the Nash equilibrium. Our findings reveal that there exists a unique Nash equilibrium when the
number of PUs is less than a threshold determined by the budgets and elasticity of SUs.
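As an illustration of iterative strategy adaptation toward equilibrium, the toy loop below has each PU repeatedly nudge its access price in the direction that raises its revenue given the SUs' demand response. The demand function, step size, and update rule are assumptions made for the sketch; the paper's actual algorithm operates on the three-stage Stackelberg game.

```python
def iterate_prices(demand, prices, step=0.1, rounds=200):
    """Illustrative best-response price adaptation for multiple PUs.

    `demand(i, prices)` is assumed to return the SUs' demand on PU i's
    channel under the given price vector.  Each PU moves its price one
    step toward higher revenue; a fixed point approximates equilibrium.
    """
    prices = list(prices)
    for _ in range(rounds):
        for i in range(len(prices)):
            rev = lambda p: p * demand(i, prices[:i] + [p] + prices[i + 1:])
            up, down = rev(prices[i] + step), rev(prices[i] - step)
            if up > rev(prices[i]) and up >= down:
                prices[i] += step            # raising the price pays off
            elif down > rev(prices[i]):
                prices[i] = max(0.0, prices[i] - step)  # lowering pays off
    return prices
```

For example, with a single PU facing linear demand 10 - p, the loop settles near the revenue-maximizing price of 5.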
20. QoS Guarantees and Service Differentiation for Dynamic Cloud
Applications
Abstract :
Cloud elasticity allows dynamic resource provisioning in concert with actual application
demands. Feedback control approaches have been applied with success to resource allocation in
physical servers. However, cloud dynamics make the design of an accurate and stable resource
controller challenging, especially when application-level performance is considered as the
measured output. Application-level performance is highly dependent on the characteristics of
workload and sensitive to cloud dynamics. To address these challenges, we extend a self-tuning fuzzy control (STFC) approach, originally developed for response time assurance in web servers, to resource allocation in virtualized environments. We introduce mechanisms for adaptive output
amplification and flexible rule selection in the STFC approach for better adaptability and
stability. Based on the STFC, we further design a two-layer QoS provisioning framework,
DynaQoS, that supports adaptive multi-objective resource allocation and service differentiation.
We implement a prototype of DynaQoS on a Xen-based cloud testbed. Experimental results on
representative server workloads show that STFC outperforms popular controllers such as
Kalman filter, ARMA, and adaptive PI in the control of CPU, memory, and disk bandwidth
resources under both static and dynamic workloads. Further results with multiple control
objectives and service classes demonstrate the effectiveness of DynaQoS in performance-power
control and service differentiation.
21. Quantifying and Verifying Reachability for Access Controlled Networks
Abstract :
Quantifying and querying network reachability is important for security monitoring and
auditing as well as many aspects of network management such as troubleshooting, maintenance,
and design. Although attempts to model network reachability have been made, feasible solutions
to computing network reachability have remained unknown. In this paper, we propose a suite of
algorithms for quantifying reachability based on network configurations [mainly Access Control
Lists (ACLs)] as well as solutions for querying network reachability. We present a network
reachability model that considers connectionless and connection-oriented transport protocols, stateless and stateful routers/firewalls, static and dynamic NAT, PAT, IP tunneling, etc. We
implemented the algorithms in our network reachability tool called Quarnet and conducted
experiments on a university network. Experimental results show that the offline computation of
reachability matrices takes a few hours, and the online processing of a reachability query takes
0.075 s on average.
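At its core, reachability quantification composes the packet sets that each ACL on a path permits. A toy version over a single (protocol, destination-port) dimension, with hypothetical rule tuples, looks like:

```python
def acl_allows(acl, packet):
    """First-match ACL evaluation over a (proto, dst_port) packet tuple.

    Each rule is (action, proto, port_lo, port_hi); unmatched packets
    fall through to the implicit deny, as in real ACL semantics.
    """
    proto, port = packet
    for action, rule_proto, port_lo, port_hi in acl:
        if proto == rule_proto and port_lo <= port <= port_hi:
            return action == "permit"
    return False  # implicit deny

def reachable_ports(path_acls, proto, ports):
    """Ports of `proto` that traverse every ACL on the path: reachability
    as the intersection of the packet sets the ACLs permit."""
    return [p for p in ports
            if all(acl_allows(acl, (proto, p)) for acl in path_acls)]
```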
22. Rake: Semantics Assisted Network-Based Tracing Framework
Abstract :
The ability to trace request execution paths is critical for diagnosing performance faults in
large-scale distributed systems. Previous black-box and white-box approaches are either
inaccurate or invasive. We present a novel semantics-assisted gray-box tracing approach, called
Rake, which can accurately trace individual requests by observing network traffic. Rake infers the
causality between messages by identifying polymorphic IDs in messages according to
application semantics. To make Rake universally applicable, we design a Rake language so that
users can easily describe necessary semantics of their applications while reusing the core Rake
component. We evaluate Rake using a few popular distributed applications, including web
search, distributed computing cluster, content provider network, and online chatting. Our results
demonstrate Rake is much more accurate than the black-box approaches while requiring no
modification to OS/applications. In the CoralCDN (a content distributed network) experiments,
Rake links messages with much higher accuracy than WAP5, a state-of-the-art black-box
approach. In the Hadoop (a distributed computing cluster platform) experiments, Rake helps
reveal several previously unknown issues that may lead to performance degradation, including an RPC (Remote Procedure Call) abuse problem.
23. Semi-Random Backoff: Towards Resource Reservation for Channel Access
in Wireless LANs
Abstract :
This paper proposes a semi-random backoff (SRB) method that enables resource reservation
in contention-based wireless LANs. The proposed SRB is fundamentally different from
traditional random backoff methods because it provides an easy migration path from random
backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station
to set its backoff counter to a deterministic value upon a successful packet transmission. This
deterministic value will allow the station to reuse the time-slot in consecutive backoff cycles.
When multiple stations with successful packet transmissions reuse their respective time-slots, the
collision probability is reduced, and the channel achieves the equivalence of resource
reservation. In case of a failed packet transmission, a station will revert to the standard random
backoff method and probe for a new available time-slot. The proposed SRB method can be
readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to
the existing DCF/EDCA implementations. Theoretical analysis and simulation results validate
the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When
combined with an adaptive mechanism and a persistent backoff process, SRB can also be
effective for large-scale and lightly loaded wireless networks.
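The central rule is small enough to state directly; the constants and the reserved-slot bookkeeping here are simplifications of the full SRB scheme:

```python
import random

def next_backoff(prev_success, reserved_slot, cw_min=16):
    """Semi-random backoff: after a successful transmission, reuse a
    deterministic slot (an implicit resource reservation); after a
    failure, fall back to standard random backoff to probe for a new
    free slot."""
    if prev_success:
        return reserved_slot              # deterministic slot reuse
    return random.randrange(cw_min)       # standard random backoff
```

When all recently successful stations take the deterministic branch, their slots stop colliding with each other, which is exactly the reservation effect described above.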
24. Throughput-Optimal Scheduling in Multihop Wireless Networks Without
Per-Flow Information
Abstract :
In this paper, we consider the problem of link scheduling in multihop wireless networks
under general interference constraints. Our goal is to design scheduling schemes that do not use
per-flow or per-destination information, maintain a single data queue for each link, and exploit
only local information, while guaranteeing throughput optimality. Although the celebrated back-
pressure algorithm maximizes throughput, it requires per-flow or per-destination information. It
is usually difficult to obtain and maintain this type of information, especially in large networks,
where there are numerous flows. Also, the back-pressure algorithm maintains a complex data
structure at each node, keeps exchanging queue-length information among neighboring nodes,
and commonly results in poor delay performance. In this paper, we propose scheduling schemes
that can circumvent these drawbacks and guarantee throughput optimality. These schemes use
either the readily available hop-count information or only the local information for each link. We
rigorously analyze the performance of the proposed schemes using fluid limit techniques via an
inductive argument and show that they are throughput-optimal. We also conduct simulations to
validate our theoretical results in various settings and show that the proposed schemes can
substantially improve the delay performance in most scenarios.
25. Delay-Based Network Utility Maximization
Abstract :
It is well known that max-weight policies based on a queue backlog index can be used to
stabilize stochastic networks, and that similar stability results hold if a delay index is used. Using
Lyapunov optimization, we extend this analysis to design a utility maximizing algorithm that
uses explicit delay information from the head-of-line packet at each user. The resulting policy is
shown to ensure deterministic worst-case delay guarantees and to yield a throughput utility that
differs from the optimally fair value by an amount that is inversely proportional to the delay
guarantee. Our results hold for a general class of 1-hop networks, including packet switches and
multiuser wireless systems with time-varying reliability.
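The scheduling rule itself is a one-liner: instead of a queue-backlog index, the max-weight decision uses the head-of-line (HOL) delay of each user. The field names below are hypothetical.

```python
def delay_based_maxweight(users):
    """Serve the user with the largest HOL delay times current service
    rate: the delay-index analogue of backlog-based max-weight."""
    return max(users, key=lambda u: u["hol_delay"] * u["rate"])["id"]
```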
26. Topology Control for Effective Interference Cancellation in Multiuser
MIMO Networks
Abstract :
In multiuser multiple-input–multiple-output (MIMO) networks, receivers decode multiple
concurrent signals using successive interference cancellation (SIC). With SIC, a weak target
signal can be deciphered in the presence of stronger interfering signals. However, this is only
feasible if each strong interfering signal satisfies a signal-to-noise-plus-interference ratio (SINR)
requirement. This necessitates the appropriate selection of a subset of links that can be
concurrently active in each receiver’s neighborhood; in other words, a subtopology consisting of
links that can be simultaneously active in the network is to be formed. If the selected
subtopologies are of small size, the delay between the transmission opportunities on a link
increases. Thus, care should be taken to form a limited number of subtopologies. We find that
the problem of constructing the minimum number of subtopologies such that SIC decoding is
successful with a desired probability threshold is NP-hard. Given this, we propose MUSIC, a
framework that greedily forms and activates subtopologies in a way that favors successful SIC
decoding with a high probability. MUSIC also ensures that the number of selected subtopologies
is kept small. We provide both a centralized and a distributed version of our framework. We
prove that our centralized version approximates the optimal solution for the considered
problem. We also perform extensive simulations to demonstrate that: 1) MUSIC forms a small
number of subtopologies that enable efficient SIC operations; the number of subtopologies
formed is at most 17% larger than the optimum number of topologies, discovered through
exhaustive search (in small networks); 2) MUSIC outperforms approaches that simply consider
the number of antennas as a measure for determining the links that can be simultaneously active.
Specifically, MUSIC provides throughput improvements of up to four times, as compared to
such an approach, in various topological settings. The improvements are directly attributable to a significantly higher probability of correct SIC-based decoding with MUSIC.
27. Localization of Wireless Sensor Networks in the Wild: Pursuit of Ranging
Quality
Abstract :
Localization is a fundamental issue of wireless sensor networks that has been extensively
studied in the literature. Our real-world experience from GreenOrbs, a sensor network system
deployed in a forest, shows that localization in the wild remains very challenging due to various
interfering factors. In this paper, we propose CDL, a Combined and Differentiated Localization
approach for localization that exploits the strength of range-free approaches and range-based
approaches using received signal strength indicator (RSSI). A critical observation is that ranging
quality greatly impacts the overall localization accuracy. To achieve a better ranging quality, our
method CDL incorporates virtual-hop localization, local filtration, and ranging-quality aware
calibration. We have implemented and evaluated CDL by extensive real-world experiments in
GreenOrbs and large-scale simulations. Our experimental and simulation results demonstrate that
CDL outperforms current state-of-the-art localization approaches with more accurate and consistent performance. For example, the average location error using CDL in the GreenOrbs system is 2.9 m, while the previous best method, SISR, has an average error of 4.6 m.
28. Torrents on Twitter: Explore Long-Term Social Relationships in Peer-to-
Peer Systems
Abstract :
Peer-to-peer file sharing systems, most notably BitTorrent (BT), have achieved tremendous
success among Internet users. Recent studies suggest that the long-term relationships among BT
peers can be explored to enhance the downloading performance; for example, for re-sharing
previously downloaded contents or for effectively collaborating among the peers. However,
whether such relationships do exist in real world remains unclear. In this paper, we take a first
step towards the real-world applicability of peers’ long-term relationship through a measurement
based study. We find that 95% peers cannot even meet each other again in the BT networks;
therefore, most peers can hardly be organized for further cooperation. This result contradicts to
the conventional understanding based on the observed daily arrival pattern in peer-to-peer
networks. To better understand this, we revisit the arrival of BT peers as well as their longrange
dependence. We find that the peers’ arrival patterns are highly diverse; only a limited number of
stable peers have clear self-similar and periodic daily arrivals patterns. The arrivals of most peers
are, however, quite random with little evidence of long-range dependence. To better utilize these
stable peers, we start to explore peers’ long-term relationships in specific swarms instead of
conventional BT networks. Fortunately, we find that the peers in Twitter-initialized torrents have
stronger temporal locality, thus offering great opportunity for improving their degree of sharing.
Our PlanetLab experiments further indicate that the incorporation of social relations remarkably
accelerates the download completion time. The improvement remains noticeable even in a hybrid
system with a small set of social friends only.
AFFECTIVE COMPUTING
1. Predicting Emotional Responses to Long Informal Text
Abstract :
Most sentiment analysis approaches deal with binary or ordinal prediction of affective states
(e.g., positive versus negative) on review-related content from the perspective of the author. The
present work focuses on predicting the emotional responses of online communication in
nonreview social media on a real-valued scale on the two affective dimensions of valence and
arousal. For this, a new dataset is introduced, together with a detailed description of the process
that was followed to create it. Important phenomena such as correlations between different
affective dimensions and intercoder agreement are thoroughly discussed and analyzed. Various
methodologies for automatically predicting those states are also presented and evaluated. The
results show that the prediction of intricate emotional states is possible, obtaining at best a
correlation of 0.89 for valence and 0.42 for arousal with the human-assigned assessments.
2. Analyses of a Multimodal Spontaneous Facial Expression Database
Abstract :
Creating a large and natural facial expression database is a prerequisite for facial expression
analysis and classification. It is, however, not only time consuming but also difficult to capture
an adequately large number of spontaneous facial expression images and their meanings because
no standard, uniform, and exact measurements are available for database collection and
annotation. Thus, comprehensive first-hand data analyses of a spontaneous expression database
may provide insight for future research on database construction, expression recognition, and
emotion inference. This paper presents our analyses of a multimodal spontaneous facial
expression database of natural visible and infrared facial expressions (NVIE). First, the
effectiveness of emotion-eliciting videos in the database collection is analyzed with the mean
and variance of the subjects’ self-reported data. Second, an interrater reliability analysis of
raters’ subjective evaluations for apex expression images and sequences is conducted using
Kappa and Kendall’s coefficients. Third, we propose a matching rate matrix to explore the
agreements between displayed spontaneous expressions and felt affective states. Lastly, the
thermal differences between the posed and spontaneous facial expressions are analyzed using a paired-samples t-test. The results of these analyses demonstrate the effectiveness of our emotion-inducing experimental design, the gender difference in emotional responses, and the coexistence
of multiple emotions/expressions. Facial image sequences are more informative than apex
images for both expression and emotion recognition. Labeling an expression image or sequence
with multiple categories together with their intensities could be a better approach than labeling
the expression image or sequence with one dominant category. The results also demonstrate both
the importance of facial expressions as a means of communication to convey affective states and
the diversity of the displayed manifestations of felt emotions. There are indeed some significant
differences between the temperature difference data of most posed and spontaneous facial
expressions, many of which are found in the forehead and cheek regions.
3. Facial Expression Recognition in the Encrypted Domain Based on Local
Fisher Discriminant Analysis
Abstract :
Facial expression recognition forms a critical capability desired by human-interacting
systems that aim to be responsive to variations in the human’s emotional state. Recent trends
toward cloud computing and outsourcing has led to the requirement for facial expression
recognition to be performed remotely by potentially untrusted servers. This paper presents a
system that addresses the challenge of performing facial expression recognition when the test
image is in the encrypted domain. More specifically, to the best of our knowledge, this is the first
known result that performs facial expression recognition in the encrypted domain. Such a system
solves the problem of needing to trust servers since the test image for facial expression
recognition can remain in encrypted form at all times without needing any decryption, even
during the expression recognition process. Our experimental results on the popular JAFFE and MUG facial expression databases demonstrate that a recognition rate of up to 95.24 percent can be
achieved even in the encrypted domain.
4. Modeling Arousal Phases in Daily Living Using Wearable Sensors
Abstract :
In this work, we introduce methods for studying psychological arousal in naturalistic daily
living. We present an activity-aware arousal phase modeling approach that incorporates the
additional heart rate (AHR) algorithm to estimate arousal onsets (activations) in the presence of
physical activity (PA). In particular, our method filters spurious PA-induced activations from
AHR activations, e.g., caused by changes in body posture, using activity primitive patterns and
their distributions. Furthermore, our approach includes algorithms for estimating arousal duration
and intensity, which are key to arousal assessment. We analyzed the modeling procedure in a
participant study with 180 h of unconstrained daily life recordings using a multimodal wearable
system comprising two acceleration sensors, a heart rate monitor, and a belt computer. We show
how participants’ sensor-based arousal phase estimations can be evaluated in relation to daily
activity and self-report information. For example, participant-specific arousal was frequently estimated during conversations and yielded the highest intensities during office work. We believe
that our activity-aware arousal modeling can be used to investigate personal arousal
characteristics and introduce novel options for studying human behavior in daily living.
SECURE COMPUTING
1. A System for Timely and Controlled Information Sharing in Emergency
Situations
Abstract :
During natural disasters or emergency situations, information sharing is an essential requirement for effective emergency management. In this paper, we present an access control
model to enforce controlled information sharing in emergency situations. An in-depth analysis of
the model is discussed throughout the paper, and administration policies are introduced to
enhance the model's flexibility during emergencies. Moreover, a prototype implementation and experimental results are provided, showing the efficiency and scalability of the system.
2. WARNINGBIRD: A Near Real-Time Detection System For Suspicious
Urls In Twitter Stream
Abstract :
Twitter is prone to malicious tweets containing URLs for spam, phishing, and malware
distribution. Conventional Twitter spam detection schemes utilize account features such as the
ratio of tweets containing URLs and the account creation date, or relation features in the Twitter
graph. These detection schemes are ineffective against feature fabrications or consume much
time and resources. Conventional suspicious URL detection schemes utilize several features
including lexical features of URLs, URL redirection, HTML content, and dynamic behavior.
However, evading techniques such as time-based evasion and crawler evasion exist. In this
paper, we propose WARNINGBIRD, a suspicious URL detection system for Twitter. Our
system investigates correlations of URL redirect chains extracted from several tweets. Because
attackers have limited resources and usually reuse them, their URL redirect chains frequently
share the same URLs. We develop methods to discover correlated URL redirect chains using the
frequently shared URLs and to determine their suspiciousness. We collect numerous tweets from
the Twitter public timeline and build a statistical classifier using them. Evaluation results show
that our classifier accurately and efficiently detects suspicious URLs. We also present
WARNINGBIRD as a near real-time system for classifying suspicious URLs in the Twitter
stream.
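The reuse signal the system exploits can be sketched as follows: URLs appearing in several distinct redirect chains are flagged as shared infrastructure, since attackers recycle the same redirection servers across tweets. The threshold is illustrative.

```python
from collections import Counter

def frequent_entry_points(redirect_chains, min_share=2):
    """Flag URLs that occur in at least `min_share` distinct redirect
    chains: the shared-infrastructure signal used to correlate chains.
    Deduplicating within each chain counts a URL once per chain."""
    counts = Counter(url for chain in redirect_chains for url in set(chain))
    return {url for url, n in counts.items() if n >= min_share}
```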
3. Location-Aware and Safer Cards: Enhancing RFID Security and Privacy
via Location Sensing
Abstract :
In this paper, we report on a new approach for enhancing security and privacy in certain
RFID applications whereby location or location-related information (such as speed) can serve as
a legitimate access context. Examples of these applications include access cards, toll cards, credit
cards, and other payment tokens. We show that location awareness can be used by both tags and
back-end servers for defending against unauthorized reading and relay attacks on RFID systems.
On the tag side, we design a location-aware selective unlocking mechanism with which tags can
selectively respond to reader interrogations rather than doing so promiscuously. On the server
side, we design a location-aware secure transaction verification scheme that allows a bank server
to decide whether to approve or deny a payment transaction and detect a specific type of relay
attack involving malicious readers. The premise of our work is a current technological
advancement that can enable RFID tags with low-cost location (GPS) sensing capabilities.
Unlike prior research on this subject, our defenses do not rely on auxiliary devices or require any
explicit user involvement.
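A minimal sketch of the tag-side selective-unlocking check, assuming a planar coordinate model and hypothetical thresholds:

```python
def tag_should_respond(tag_loc, reader_loc, speed_mps,
                       max_dist_m=5.0, max_speed_mps=2.0):
    """Tag-side selective unlocking: answer an interrogation only when
    the context is plausible for a legitimate transaction, i.e., the
    reader is nearby and the tag is not moving at vehicle speed (which
    would suggest a drive-by skimming or relay attempt)."""
    dx, dy = tag_loc[0] - reader_loc[0], tag_loc[1] - reader_loc[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dist <= max_dist_m and speed_mps <= max_speed_mps
```

The server-side transaction check described above is analogous: the bank compares the card's reported location against the terminal's location before approving a payment.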
4. Malware Clearance for Secure Commitment of OS-Level Virtual Machines
Abstract :
A virtual machine (VM) can simply be created upon use and disposed of upon the completion of its tasks or the detection of an error. The disadvantage of this approach is that if there is no
malicious activity, the user has to redo all of the work in her actual workspace since there is no
easy way to commit (i.e., merge) only the benign updates within the VM back to the host
environment. In this work, we develop a VM commitment system called Secom to automatically
eliminate malicious state changes when merging the contents of an OS-level VM to the host.
Secom consists of three steps: grouping state changes into clusters, distinguishing between
benign and malicious clusters, and committing benign clusters. Secom has three novel features.
First, instead of relying on a huge volume of log data, it leverages OS-level information flow and
malware behavior information to recognize malicious changes. As a result, the approach imposes
a smaller performance overhead. Second, different from existing intrusion detection and
recovery systems that detect compromised OS objects one by one, Secom classifies objects into
clusters and then identifies malicious objects on a cluster by cluster basis. Third, to reduce the
false-positive rate when identifying malicious clusters, it simultaneously considers two malware
behaviors that are of different types and the origin of the processes that exhibit these behaviors,
rather than considers a single behavior alone as done by existing malware detection methods. We
have successfully implemented Secom on the feather-weight virtual machine system, a
Windows-based OS-level virtualization system. Experiments show that the prototype can
effectively eliminate malicious state changes while committing a VM with small performance
degradation. Moreover, compared with the commercial antimalware tools, the Secom prototype
has a smaller number of false negatives and thus can more thoroughly clean up malware side
effects. In addition, the number of false positives of the Secom prototype is also lower than that
achieved by the online behavior-based approach of the commercial tools.
5. Predicting Architectural Vulnerability on Multithreaded Processors under
Resource Contention and Sharing
Abstract :
Architectural vulnerability factor (AVF) characterizes a processor’s vulnerability to soft
errors. Interthread resource contention and sharing on a multithreaded processor (e.g., SMT,
CMP) shows nonuniform impact on a program’s AVF when it is co-scheduled with different
programs. However, measuring the AVF is extremely expensive in terms of hardware and
computation. This paper proposes a scalable two-level predictive mechanism capable of
predicting a program’s AVF on an SMT/CMP architecture from easily measured metrics.
Essentially, the first-level model correlates the AVF in a contention-free environment with
important performance metrics and the processor configuration, while the second-level model
captures the interthread resource contention and sharing via processor structures’ occupancies.
By utilizing the proposed scheme, we can accurately estimate any unseen program’s soft error
vulnerability under resource contention and sharing with any other program(s), on an arbitrarily
configured multithreaded processor. In practice, the proposed model can be used to find soft
error resilient thread-to-core scheduling for multithreaded processors.
6. SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems
Abstract :
The open nature of peer-to-peer systems exposes them to malicious activity. Building trust
relationships among peers can mitigate attacks of malicious peers. This paper presents
distributed algorithms that enable a peer to reason about trustworthiness of other peers based on
past interactions and recommendations. Peers create their own trust network in their proximity
by using locally available information and do not try to learn global trust information. Two contexts of trust, service and recommendation, are defined to measure trustworthiness
in providing services and giving recommendations. Interactions and recommendations are
evaluated based on importance, recentness, and peer satisfaction parameters. Additionally,
recommender’s trustworthiness and confidence about a recommendation are considered while
evaluating recommendations. Simulation experiments on a file sharing application show that the
proposed model can mitigate attacks on 16 different malicious behavior models. In the
experiments, good peers were able to form trust relationships in their proximity and isolate
malicious peers.
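The recency- and importance-weighted evaluation of interactions described above can be illustrated with a toy score (a hedged sketch only: the exponential fading factor and the neutral default are illustrative assumptions, not SORT's actual equations):

```python
def interaction_trust(interactions, fading=0.9):
    """Toy interaction-trust score in [0, 1] combining the abstract's
    three ingredients: satisfaction, importance, and recentness.
    interactions: list of (satisfaction in [0, 1], importance > 0),
    oldest first; newer interactions are weighted more heavily."""
    num = den = 0.0
    for age, (sat, imp) in enumerate(reversed(interactions)):
        weight = imp * (fading ** age)   # newest interaction has age 0
        num += weight * sat
        den += weight
    return num / den if den else 0.5     # no history -> neutral trust

history = [(0.2, 1.0), (0.9, 1.0), (1.0, 2.0)]  # oldest first
score = interaction_trust(history)
```

Because recent, important interactions dominate, a peer that has recently behaved well scores above the neutral 0.5 even with some poor interactions in its older history.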
IMAGE PROCESSING
1. General Framework to Histogram-Shifting-Based Reversible Data Hiding
Abstract :
Histogram shifting (HS) is a useful technique for reversible data hiding (RDH). With HS-based
RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit
the HS technique and present a general framework to construct HS-based RDH. By the proposed
framework, one can get an RDH algorithm by simply designing the so-called shifting and
embedding functions. Moreover, by taking specific shifting and embedding functions, we show
that several RDH algorithms reported in the literature are special cases of this general
construction. In addition, two novel and efficient RDH algorithms are also introduced to further
demonstrate the universality and applicability of our framework. It is expected that more
efficient RDH algorithms can be devised according to the proposed framework by carefully
designing the shifting and embedding functions.
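One classic instance of a shifting/embedding function pair (e.g., the peak/zero-bin scheme, which such a framework covers as a special case) can be sketched as below. This is a simplified toy: it assumes an 8-bit image whose first empty histogram bin lies above the peak, and ignores capacity and overflow bookkeeping.

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embed: bins strictly between the peak bin p
    and the first empty bin z are shifted up by one (shifting function),
    then peak pixels carry one bit each: bit 1 -> p+1, bit 0 -> stay p
    (embedding function)."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = int(hist.argmax())                              # peak bin
    z = int(np.where(hist[p + 1:] == 0)[0][0]) + p + 1  # first empty bin above p
    out = img.astype(int)
    out[(out > p) & (out < z)] += 1                     # shifting function
    flat = out.ravel()
    carriers = np.flatnonzero(flat == p)                # peak pixels, scan order
    flat[carriers[:len(bits)]] += np.asarray(bits)      # embedding function
    return flat.reshape(img.shape).astype(np.uint8), p, z

def hs_extract(marked, p, z):
    """Reverse pass: read bits from pixels at p/p+1, then undo the shift
    so the cover image is restored exactly."""
    m = marked.astype(int)
    flat = m.ravel()
    carriers = np.flatnonzero((flat == p) | (flat == p + 1))
    bits = (flat[carriers] == p + 1).astype(int)
    m[(m > p) & (m <= z)] -= 1                          # undo the shift
    return bits, m.astype(np.uint8)

rng = np.random.default_rng(3)
img = rng.integers(0, 100, size=(32, 32)).astype(np.uint8)
marked, p, z = hs_embed(img, [1, 0, 1, 1, 0])
bits, restored = hs_extract(marked, p, z)
```

The roundtrip recovers both the payload bits and, bit-exactly, the original image, which is the defining property of reversible data hiding.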
2. Robust Ellipse Fitting Based on Sparse Combination of Data Points
Abstract :
Ellipse fitting is widely applied in computer vision and industrial automation, where it typically
follows an edge-detection preprocessing step on the original image. Consequently, an ellipse
fitting method depends not only on its own performance but also on the accuracy of edge
detection, since the outliers and edge-point errors introduced by edge detection can cause severe
performance degradation. In this paper, we develop a robust ellipse fitting method to alleviate the
influence of outliers. The proposed algorithm solves for the ellipse parameters by linearly
combining a subset of
("more accurate") data points (formed from edge points) rather than all data points (which
contain possible outliers). In addition, considering that squaring the fitting residuals can magnify
the contributions of these extreme data points, our algorithm replaces it with the absolute
residuals to reduce this influence. Moreover, the norm of data point errors is bounded, and the
worst case performance optimization is formed to be robust against data point errors. The
resulting mixed l1–l2 optimization problem is further derived as a second-order cone
programming one and solved by the computationally efficient interior-point methods. Note that
the fitting approach developed in this paper specifically deals with the overdetermined system,
whereas the current sparse representation theory is only applied to underdetermined systems.
Therefore, the proposed algorithm can be looked upon as an extended application and
development of the sparse representation theory. Some simulated and experimental examples are
presented to illustrate the effectiveness of the proposed ellipse fitting approach.
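For contrast, the non-robust baseline that such a method improves upon, an ordinary least-squares algebraic fit over all data points with squared residuals, can be sketched as follows (an illustrative baseline, not the paper's sparse/absolute-residual SOCP formulation):

```python
import numpy as np

def fit_ellipse_ls(x, y):
    """Plain algebraic least-squares ellipse fit: solve
    a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 over ALL points.
    Squaring the residuals magnifies the influence of outliers,
    which is exactly the weakness robust fitting addresses."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coef

t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = 3 * np.cos(t), 2 * np.sin(t)        # ellipse x^2/9 + y^2/4 = 1
a, b, c, d, e = fit_ellipse_ls(x, y)
```

On clean points the recovered coefficients match the generating ellipse; adding even a few gross outliers to `x, y` visibly skews this fit, motivating the robust subset selection above.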
3. Computationally Tractable Stochastic Image Modeling Based on
Symmetric Markov Mesh Random Fields
Abstract :
In this paper, the properties of a new class of causal Markov random fields, named
symmetric Markov mesh random field, are initially discussed. It is shown that the symmetric
Markov mesh random fields from the upper corners are equivalent to the symmetric Markov
mesh random fields from the lower corners. Based on this new random field, a symmetric,
corner-independent, and isotropic image model is then derived which incorporates the
dependency of a pixel on all its neighbors. The introduced image model comprises the product of
several local 1D density and 2D joint density functions of pixels in an image thus making it
computationally tractable and practically feasible by allowing the use of histogram and joint
histogram approximations to estimate the model parameters. An image restoration application is
also presented to confirm the effectiveness of the model developed. The experimental results
demonstrate that this new model provides an improved tool for image modeling purposes
compared to the conventional Markov random field models.
4. A Robust Method for Rotation Estimation Using Spherical Harmonics
Representation
Abstract :
This paper presents a robust method for 3D object rotation estimation using spherical
harmonics representation and the unit quaternion vector. The proposed method provides a
closed-form solution for rotation estimation without recurrence relations or searching for point
correspondences between two objects. The rotation estimation problem is cast as a
minimization problem, which finds the optimum rotation angles between two objects of interest
in the frequency domain. The optimum rotation angles are obtained by calculating the unit
quaternion vector from a symmetric matrix, which is constructed from the two sets of spherical
harmonics coefficients using an eigendecomposition technique. Our experimental results on
hundreds of 3D objects show that the proposed method is highly accurate in rotation estimation,
robust to noisy data and missing surface points, and able to handle intra-class variability between
3D objects.
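The "unit quaternion from a symmetric matrix via eigendecomposition" step can be illustrated on plain 3D point sets using Horn's classic closed-form construction (an assumption for illustration: the paper builds the analogous symmetric matrix from spherical-harmonic coefficients rather than from point correspondences):

```python
import numpy as np

def rotation_from_quaternion_eig(P, Q):
    """Recover the rotation R with Q ~ P @ R.T: the optimal unit
    quaternion (w, x, y, z) is the eigenvector of the largest eigenvalue
    of a 4x4 symmetric matrix N built from the correlation matrix S."""
    S = P.T @ Q                                   # 3x3 correlation matrix
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    vals, vecs = np.linalg.eigh(N)                # symmetric eigendecomposition
    w, x, y, z = vecs[:, -1]                      # top-eigenvalue eigenvector
    return np.array([                             # unit quaternion -> rotation
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

t = 0.7
R_true = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
P = np.random.default_rng(2).normal(size=(20, 3))
R_est = rotation_from_quaternion_eig(P, P @ R_true.T)
```

Note the sign ambiguity of the eigenvector is harmless: the quaternions q and -q produce the same rotation matrix.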
5. Detecting, Grouping, and Structure Inference for Invariant Repetitive
Patterns in Images
Abstract :
The efficient and robust extraction of invariant patterns from an image is a long-standing
problem in computer vision. Invariant structures are often related to repetitive or near-repetitive
patterns. The perception of repetitive patterns in an image is strongly linked to the visual
interpretation and composition of textures. Repetitive patterns are products of both repetitive
structures and repetitive reflections or color patterns. In other words, patterns that exhibit
near-stationary behavior provide rich information about objects, their shapes, and their texture in
an image. In this paper, we propose a new algorithm for repetitive pattern detection and
grouping. The algorithm follows the classical region growing image segmentation scheme. It
utilizes a mean-shift-like dynamic to group local image patches into clusters. It exploits a
continuous joint alignment to: 1) match similar patches, and 2) refine the subspace grouping. We
also propose an algorithm for inferring the composition structure of the repetitive patterns. The
inference algorithm constructs a data-driven structural completion field, which merges the
detected repetitive patterns into specific global geometric structures. The result of higher level
grouping for image patterns can be used to infer the geometry of objects and estimate the general
layout of a crowded scene.
6. Action Recognition from Video Using Feature Covariance Matrices
Abstract :
We propose a general framework for fast and accurate recognition of actions in video using
empirical covariance matrices of features. A dense set of spatio-temporal feature vectors is
computed from video to provide a localized description of the action, and subsequently
aggregated in an empirical covariance matrix to compactly represent the action. Two supervised
learning methods for action recognition are developed using feature covariance matrices.
Common to both methods is the transformation of the classification problem in the closed
convex cone of covariance matrices into an equivalent problem in the vector space of symmetric
matrices via the matrix logarithm. The first method applies nearest-neighbor classification using
a suitable Riemannian metric for covariance matrices. The second method approximates the
logarithm of a query covariance matrix by a sparse linear combination of the logarithms of
training covariance matrices. The action label is then determined from the sparse coefficients.
Both methods achieve state-of-the-art classification performance on several datasets, and are
robust to action variability, viewpoint changes, and low object resolution. The proposed
framework is conceptually simple and has low storage and computational requirements, making it
attractive for real-time implementation.
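The matrix-logarithm mapping that both classifiers share can be sketched as follows (a minimal illustration; the random features, the epsilon regularization, and the plain Euclidean distance on flattened logs are assumptions for the sketch):

```python
import numpy as np

def log_cov_descriptor(features, eps=1e-6):
    """Compress an (n x d) set of feature vectors into the matrix
    logarithm of their empirical covariance, flattened to the upper
    triangle. The log maps the convex cone of covariance matrices
    into the vector space of symmetric matrices."""
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)                      # center the features
    C = X.T @ X / (X.shape[0] - 1)              # empirical covariance
    C += eps * np.eye(C.shape[0])               # keep C positive definite
    w, V = np.linalg.eigh(C)                    # eigendecomposition of SPD C
    logC = (V * np.log(w)) @ V.T                # matrix logarithm of C
    return logC[np.triu_indices_from(logC)]     # symmetric -> flat vector

rng = np.random.default_rng(1)
desc_a = log_cov_descriptor(rng.normal(size=(500, 5)))   # "action" A
desc_b = log_cov_descriptor(rng.normal(size=(500, 5)))   # "action" B
distance = float(np.linalg.norm(desc_a - desc_b))        # NN classification metric
```

Once descriptors live in this flat vector space, both nearest-neighbor classification and sparse linear approximation of a query by training descriptors reduce to ordinary linear algebra.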
7. Local Directional Number Pattern for Face Analysis: Face and Expression
Recognition
Abstract :
This paper proposes a novel local feature descriptor, local directional number pattern (LDN),
for face analysis, i.e., face and expression recognition. LDN encodes the directional information
of the face’s textures (i.e., the texture’s structure) in a compact way, producing a more
discriminative code than current methods. We compute the structure of each micro-pattern with
the aid of a compass mask that extracts directional information, and we encode such information
using the prominent direction indices (directional numbers) and sign, which allows us to
distinguish among similar structural patterns that have different intensity transitions. We divide
the face into several regions, and extract the distribution of the LDN features from them. Then,
we concatenate these features into a feature vector, and we use it as a face descriptor. We
perform several experiments in which our descriptor performs consistently under illumination,
noise, expression, and time lapse variations. Moreover, we test our descriptor with different
masks to analyze its performance in different face analysis tasks.
8. Optimized 3D Watermarking for Minimal Surface Distortion
Abstract :
This paper proposes a new approach to 3D watermarking by ensuring the optimal
preservation of mesh surfaces. A new 3D surface preservation function metric is defined,
consisting of the distance of a vertex displaced by watermarking to the original surface, the
distance to the watermarked object surface, and the actual vertex displacement. The proposed method is
statistical, blind, and robust. Minimal surface distortion according to the proposed function
metric is enforced during the statistical watermark embedding stage using the Levenberg-Marquardt
optimization method. A study of the watermark code crypto-security is provided for
the proposed methodology. According to the experimental results, the proposed methodology has
high robustness against the common mesh attacks while preserving the original object surface
during watermarking.
9. Robust Radial Face Detection for Omnidirectional Vision
Abstract :
Bio-inspired and non-conventional vision systems are highly researched topics. Among them,
omnidirectional vision systems have demonstrated their ability to significantly improve the
geometrical interpretation of scenes. However, few researchers have investigated how to perform
object detection with such systems. The existing approaches require a geometrical transformation
prior to the interpretation of the picture. In this paper, we investigate what must be taken into
account and how to process omnidirectional images provided by the sensor. We focus our
research on face detection and highlight the fact that particular attention should be paid to the
descriptors in order to successfully perform face detection on omnidirectional images. We
demonstrate that this choice is critical to obtaining high detection rates. Our results imply that the
adaptation of existing object-detection frameworks, designed for perspective images, should be
focused on the choice of appropriate image descriptors in the design of the object-detection
pipeline.
10. Noise Reduction Based on Partial-Reference, Dual-Tree Complex Wavelet
Transform Shrinkage
Abstract :
This paper presents a novel way to reduce noise introduced or exacerbated by image
enhancement methods, in particular (but not only) algorithms based on the random spray
sampling technique. Owing to the nature of sprays, output images of spray-based methods tend to
exhibit noise with an unknown statistical distribution. Rather than making inappropriate
assumptions on the statistical characteristics of this noise, a different assumption is made: the
non-enhanced image is considered to be either free of noise or affected by non-perceivable levels of noise. Taking
advantage of the higher sensitivity of the human visual system to changes in brightness, the
analysis can be limited to the luma channel of both the non-enhanced and enhanced image. Also,
given the importance of directional content in human vision, the analysis is performed through
the dual-tree complex wavelet transform (DTWCT). Unlike the discrete wavelet transform, the
DTWCT allows for distinction of data directionality in the transform space. For each level of the
transform, the standard deviation of the non-enhanced image coefficients is computed across the
six orientations of the DTWCT, then it is normalized. The result is a map of the directional
structures present in the non-enhanced image. This map is then used to shrink the coefficients of
the enhanced image. The shrunk coefficients and the coefficients from the non-enhanced image
are then mixed according to data directionality. Finally, a noise-reduced version of the enhanced
image is computed via the inverse transforms. A thorough numerical analysis of the results has
been performed in order to confirm the validity of the proposed approach.
11. Design of Low-Complexity High-Performance Wavelet Filters for Image
Analysis
Abstract :
This paper addresses the construction of a family of wavelets based on halfband polynomials.
An algorithm is proposed that ensures maximum zeros at ω = π for a desired length of analysis
and synthesis filters. We start with the coefficients of the polynomial (x + 1)^n and then use a
generalized matrix formulation method to construct the filter halfband polynomial. The designed
wavelets are efficient and give acceptable levels of peak signal-to-noise ratio when used for
image compression. Furthermore, these wavelets give satisfactory recognition rates when used
for feature extraction. Simulation results show that the designed wavelets are effective and more
efficient than the existing standard wavelets.
12. Wavelet Bayesian Network Image Denoising
Abstract :
From the perspective of the Bayesian approach, the denoising problem is essentially a prior
probability modeling and estimation task. In this paper, we propose an approach that exploits a
hidden Bayesian network, constructed from wavelet coefficients, to model the prior probability
of the original image. Then, we use the belief propagation (BP) algorithm, which estimates a
coefficient based on all the coefficients of an image, as the maximum a posteriori (MAP)
estimator to derive the denoised wavelet coefficients. We show that if the network is a spanning
tree, the standard BP algorithm can perform MAP estimation efficiently. Our experimental results
demonstrate that, in terms of peak signal-to-noise ratio and perceptual quality, the proposed
approach outperforms state-of-the-art algorithms on several images, particularly in the textured
regions, with various amounts of white Gaussian noise.
13. Blur and Illumination Robust Face Recognition via Set-Theoretic
Characterization
Abstract :
We address the problem of unconstrained face recognition from remotely acquired images.
The main factors that make this problem challenging are image degradation due to blur, and
appearance variations due to illumination and pose. In this paper, we address the problems of
blur and illumination. We show that the set of all images obtained by blurring a given image
forms a convex set. Based on this set-theoretic characterization, we propose a blur-robust
algorithm whose main step involves solving simple convex optimization problems. We do not
assume any parametric form for the blur kernels; however, if this information is available, it can
be easily incorporated into our algorithm. Furthermore, using the low-dimensional model for
illumination variations, we show that the set of all images obtained from a face image by
blurring it and by changing the illumination conditions forms a bi-convex set. Based on this
characterization, we propose a blur- and illumination-robust algorithm. Our experiments on a
challenging real dataset obtained in uncontrolled settings illustrate the importance of jointly
modeling blur and illumination.
14. View-Based Discriminative Probabilistic Modeling for 3D Object Retrieval
and Recognition
Abstract :
In view-based 3D object retrieval and recognition, each object is described by multiple
views. A central problem is how to estimate the distance between two objects. Most conventional
methods integrate the distances of view pairs across two objects as an estimation of their
distance. In this paper, we propose a discriminative probabilistic object modeling approach. It
builds probabilistic models for each object based on the distribution of its views, and the distance
between two objects is defined as the upper bound of the Kullback–Leibler divergence of the
corresponding probabilistic models. 3D object retrieval and recognition is accomplished based
on the distance measures. We first learn models for each object by adaptation from a set of
global models under a maximum likelihood principle. A further adaptation step is then performed to
enhance the discriminative ability of the models. We conduct experiments on the ETH 3D object
dataset, the National Taiwan University 3D model dataset, and the Princeton Shape Benchmark.
We compare our approach with different methods, and experimental results demonstrate the
superiority of our approach.
15. Context-Aware Sparse Decomposition for Image Denoising and Super-
Resolution
Abstract :
Image prior models based on sparse and redundant representations are attracting more and
more attention in the field of image restoration. Conventional sparsity-based methods
enforce a sparsity prior on small image patches independently. Unfortunately, these works
neglect the contextual information between the sparse representations of neighboring image
patches, which limits the modeling capability of the sparsity-based image prior, especially when
major structural information of the source image is lost through severe degradation.
In this paper, we utilize the contextual information of local patches (denoted as a context-aware
sparsity prior) to enhance the performance of sparsity-based restoration methods. In
addition, a unified framework based on the Markov random field model is proposed to tune the
local prior into a global one to deal with images of arbitrary size. An iterative numerical solution is
presented to solve the joint problem of model parameter estimation and sparse recovery.
Finally, the experimental results on image denoising and super-resolution demonstrate the
effectiveness and robustness of the proposed context-aware method.
16. Learning the Spherical Harmonic Features for 3-D Face Recognition
Abstract :
In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic
features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies
contained in spherical harmonics with different frequencies, thereby enabling the capture of both
gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D
FR techniques which are either holistic or feature based, using local features extracted from
distinctive points. First, 3-D face models are represented in a canonical representation, namely,
spherical depth map, by which SHF can be calculated. Then, considering the predictive
contribution of each SHF feature, especially in the presence of facial expression and occlusion,
feature selection methods are used to improve the predictive performance and provide faster and
more cost-effective predictors. Experiments have been carried out on three public 3-D face
datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulty in terms of facial
expression, pose, and occlusion; the results demonstrate the effectiveness of the proposed
method.
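The per-frequency energy features can be sketched as follows (a toy layout: coefficients are supplied per degree l as a list of 2l+1 values; the actual spherical-harmonic expansion of a spherical depth map is omitted):

```python
import numpy as np

def sh_band_energies(coeffs_by_degree):
    """Per-degree energies E_l = sum_m |c_{l,m}|^2 of spherical harmonic
    coefficients. Each E_l summarizes one frequency band, so low degrees
    capture gross shape and high degrees capture fine surface detail."""
    return np.array([float(np.sum(np.abs(np.asarray(c_l)) ** 2))
                     for c_l in coeffs_by_degree])

# degrees l = 0, 1, 2 with 1, 3, and 5 coefficients respectively
energies = sh_band_energies([[1.0], [0.5, 0.0, -0.5], [0, 0, 2.0, 0, 0]])
```

Because each band energy sums over all orders m of a degree, these features are invariant to rotations about the expansion center, which is one reason such energies make convenient face descriptors prior to feature selection.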
17. Rate-Distortion Optimized Rate Control for Depth Map-Based 3-D Video
Coding
Abstract :
In this paper, a novel rate control scheme with optimized bit allocation for 3-D video
coding is proposed. First, we investigate the R-D characteristics of the texture and depth map of
the coded view, as well as the quality dependency between the virtual view and the coded view.
Second, an optimal bit allocation scheme is developed to allocate target bits for both the texture
and depth maps of different views. Meanwhile, a simplified model parameter estimation scheme
is adopted to speed up the coding process. Finally, the experimental results on various 3-D video
sequences demonstrate that the proposed algorithm achieves excellent R-D efficiency and bit rate
accuracy compared to benchmark algorithms.
18. Adaptive Fingerprint Image Enhancement With Emphasis on
Preprocessing of Data
Abstract :
This article proposes several improvements to an adaptive fingerprint enhancement method
that is based on contextual filtering. The term adaptive implies that parameters of the method are
automatically adjusted based on the input fingerprint image. Five processing blocks comprise the
adaptive fingerprint enhancement method, where four of these blocks are updated in our
proposed system. Hence, the proposed overall system is novel. The four updated processing
blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the
preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used.
In the global analysis and matched filtering blocks, different forms of order statistical filters are
applied. These processing blocks yield an improved and new adaptive fingerprint image
processing method. The performance of the updated processing blocks is presented in the
evaluation part of this paper. The algorithm is evaluated against the NIST-developed NBIS
software for fingerprint recognition on the FVC databases.
19. Image Noise Level Estimation by Principal Component Analysis
Abstract :
The problem of blind noise level estimation arises in many image processing applications,
such as denoising, compression, and segmentation. In this paper, we propose a new noise level
estimation method on the basis of principal component analysis of image blocks. We show that
the noise variance can be estimated as the smallest eigenvalue of the image block covariance
matrix. Compared with 13 existing methods, the proposed approach shows a good compromise
between speed and accuracy. It is at least 15 times faster than methods with similar accuracy, and
it is at least two times more accurate than other methods. Our method does not assume the
existence of homogeneous areas in the input image and, hence, can successfully process images
containing only textures.
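The central computation, reading the noise variance off the smallest eigenvalue of the block covariance matrix, can be sketched as below. This is a simplified single-pass version (block size is an assumption, and the finite-sample estimate is slightly biased low; the paper's full method refines the estimate with block selection).

```python
import numpy as np

def estimate_noise_std(image, block=8):
    """Estimate the additive-noise standard deviation as the square root
    of the smallest eigenvalue of the covariance matrix of overlapping
    vectorized image blocks (the clean image occupies only the leading
    principal components, leaving the trailing eigenvalue ~ sigma^2)."""
    h, w = image.shape
    patches = np.array([image[i:i + block, j:j + block].ravel()
                        for i in range(h - block + 1)
                        for j in range(w - block + 1)], dtype=float)
    patches -= patches.mean(axis=0)                  # center the blocks
    cov = patches.T @ patches / (patches.shape[0] - 1)
    smallest = np.linalg.eigvalsh(cov)[0]            # ascending eigenvalues
    return float(np.sqrt(max(smallest, 0.0)))

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))   # smooth gradient image
noisy = clean + rng.normal(0.0, 5.0, clean.shape)     # true sigma = 5
sigma_hat = estimate_noise_std(noisy)
```

Note the smooth test image works precisely because its blocks span only a low-dimensional subspace; the estimator needs no homogeneous flat areas, matching the claim in the abstract.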
20. LLSURE: Local Linear SURE-Based Edge-Preserving Image Filtering
Abstract :
In this paper, we propose a novel approach for performing high-quality edge-preserving
image filtering. Based on a local linear model and using the principle of Stein’s unbiased risk
estimate as an estimator for the mean squared error from the noisy image only, we derive a
simple explicit image filter which can filter out noise while preserving edges and fine-scale
details. Moreover, this filter has a fast and exact linear-time algorithm whose computational
complexity is independent of the filtering kernel size; thus, it can be applied to real-time image
processing tasks. The experimental results demonstrate the effectiveness of the new filter for
various computer vision applications, including noise reduction, detail smoothing and
enhancement, high dynamic range compression, and flash/no-flash denoising.
21. Visually Lossless Encoding for JPEG2000
Abstract :
Due to the exponential growth in image sizes, visually lossless coding is increasingly being
considered as an alternative to numerically lossless coding, which has limited compression
ratios. This paper presents a method of encoding color images in a visually lossless manner using
JPEG2000. In order to hide coding artifacts caused by quantization, visibility thresholds (VTs)
are measured and used for quantization of subband signals in JPEG2000. The VTs are
experimentally determined from statistically modeled quantization distortion, which is based on
the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting
VTs are adjusted for locally changing backgrounds through a visual masking model, and then
used to determine the minimum number of coding passes to be included in the final codestream
for visually lossless quality under the desired viewing conditions. Codestreams produced by this
scheme are fully JPEG2000 Part-I compliant.
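The dead-zone quantizer referred to above can be sketched as follows (the step size `delta` and the common midpoint reconstruction offset `r = 0.5` are illustrative defaults; JPEG2000 derives the step size per subband, here tied to the visibility thresholds):

```python
import numpy as np

def deadzone_quantize(coeffs, delta):
    """JPEG2000-style dead-zone scalar quantizer: indices are
    sign(c) * floor(|c| / delta), giving a zero bin twice as wide
    as the others, which suppresses small coefficients."""
    c = np.asarray(coeffs, dtype=float)
    return (np.sign(c) * np.floor(np.abs(c) / delta)).astype(int)

def deadzone_dequantize(indices, delta, r=0.5):
    """Reconstruction: nonzero indices map back to
    sign(q) * (|q| + r) * delta; the zero bin stays at zero."""
    q = np.asarray(indices, dtype=float)
    return np.where(q == 0, 0.0, np.sign(q) * (np.abs(q) + r) * delta)

c = np.array([-3.7, -0.4, 0.0, 0.9, 2.5])
q = deadzone_quantize(c, delta=1.0)   # → [-3, 0, 0, 0, 2]
```

Both -0.4 and 0.9 fall inside the double-width zero bin, which is where a visibility-threshold-driven choice of `delta` hides the resulting distortion below the perceptual floor.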
22. Adaptive Markov Random Fields for Joint Unmixing and Segmentation of
Hyperspectral Images
Abstract :
Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of
decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with
their corresponding proportions (or abundances). Endmember extraction algorithms can be
employed for recovering the spectral signatures while abundances are estimated using an
inversion step. Recent works have shown that exploiting spatial dependencies between image
pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to
model these spatial correlations and partition the image into multiple classes with homogeneous
abundances. This paper proposes to define the MRF sites using similarity regions. These regions
are built using a self-complementary area filter that stems from morphological theory. This
kind of filter divides the original image into flat zones where the underlying pixels have the same
spectral values. Once the MRF has been clearly established, a hierarchical Bayesian algorithm is
proposed to estimate the abundances, the class labels, the noise variance, and the corresponding
hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the
corresponding posterior distribution of the unknown parameters and hyperparameters.
Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of
the algorithm.
23. Efficient Image Classification via Multiple Rank Regression
Abstract :
The problem of image classification has aroused considerable research interest in the field of
image processing. Traditional methods often convert an image to a vector and then use a vector-
based classifier. In this paper, a novel multiple rank regression model (MRR) for matrix data
classification is proposed. Unlike traditional vector-based methods, we employ multiple-rank left
projecting vectors and right projecting vectors to regress each matrix data set to its label for each
category. The convergence behavior, initialization, computational complexity, and parameter
determination are also analyzed. Compared with vector-based regression methods, MRR
achieves higher accuracy and has lower computational complexity. Compared with traditional
supervised tensor-based methods, MRR performs better for matrix data classification. Promising
experimental results on face, object, and hand-written digit image classification tasks are
provided to show the effectiveness of our method.
24. Separable Markov Random Field Model and Its Applications in Low Level
Vision
Abstract :
This brief proposes a continuously-valued Markov random field (MRF) model with separable
filter bank, denoted as MRFSepa, which significantly reduces the computational complexity in
the MRF modeling. In this framework, we design a novel gradient-based discriminative learning
method to learn the potential functions and separable filter banks. We learn MRFSepa models
with 2-D and 3-D separable filter banks for the applications of gray-scale/color image denoising
and color image demosaicing. By implementing the MRFSepa model on a graphics processing unit,
we achieve real-time image denoising and fast image demosaicing with high-quality results.
WEB SERVICES
1. Effective Message-Sequence Generation for Testing BPEL Programs
Abstract :
With the popularity of Web Services and Service-Oriented Architecture (SOA), quality
assurance of SOA applications, such as testing, has become a research focus. Programs
implemented by the Business Process Execution Language for Web Services (WS-BPEL), which
can be used to compose partner Web Services into composite Web Services, are one popular
kind of SOA applications. The unique features of WS-BPEL programs bring new challenges to
testing. A test case for testing a WS-BPEL program is a sequence of messages that can be
received by the WS-BPEL program under test. Previous research has not studied the challenges
of message-sequence generation induced by the unique features of WS-BPEL as a new language. In
this paper, we present a novel methodology to generate effective message sequences for testing
WS-BPEL programs. To capture the order relationship in a message sequence and the constraints
on correlated messages imposed by WS-BPEL’s routing mechanism, we model the WS-BPEL
program under test as a message-sequence graph (MSG), and generate message sequences based
on MSG. We performed experiments comparing our method with two other techniques on six
WS-BPEL programs. The results show that the message sequences generated using our method
can effectively expose faults in the WS-BPEL programs.
2. A Bayesian Network-Based Knowledge Engineering Framework for IT
Service Management
Abstract :
Service management is becoming more and more important within the area of IT
management. How to efficiently manage and organize services in complicated IT service
environments with frequent changes is a challenging issue. IT services and related information
from different sources are characterized as diverse, incomplete, heterogeneous, and
geographically distributed, and it is hard to consume these complicated services without
knowledge assistance. To address this problem, a systematic approach (with proposed toolsets
and processes) is presented to tackle the challenges of the acquisition, structuring, and
refinement of knowledge. An integrated knowledge process is developed to govern the whole
engineering procedure, which utilizes Bayesian networks (BNs) as the knowledge model. This framework can
be successfully applied on key tasks in service management, such as problem determination and
change impact analysis, and a real example of a Cisco VoIP system is introduced to show the
usefulness of this method.
3. Personalized QoS-Aware Web Service Recommendation and Visualization
Abstract :
With the proliferation of web services, an effective QoS-based approach to service
recommendation is becoming more and more important. Although service recommendation has
been studied in the recent literature, the performance of existing approaches is not satisfactory,
since 1) previous approaches fail to consider the QoS variance according to users' locations; and
2) previous recommender systems are all black boxes that provide limited information on the
performance of the service candidates. In this paper, we propose a novel collaborative filtering
algorithm designed for large-scale web service recommendation. Different from previous work,
our approach employs the characteristic of QoS and achieves considerable improvement on the
recommendation accuracy. To help service users better understand the rationale of the
recommendation and remove some of the mystery, we use a recommendation visualization
technique to show how a recommendation is grouped with other choices. Comprehensive
experiments are conducted using more than 1.5 million QoS records of real-world web service
invocations. The experimental results show the efficiency and effectiveness of our approach.
4. A Decentralized Service Discovery Approach on Peer-to-Peer Networks
Abstract :
Service-Oriented Computing (SOC) is emerging as a paradigm for developing distributed
applications. A critical issue of utilizing SOC is to have a scalable, reliable, and robust service
discovery mechanism. However, traditional service discovery methods that use centralized
registries can easily suffer from problems such as performance bottlenecks and vulnerability to
failures in large-scale service networks. To address these problems, this paper proposes a
peer-to-peer-based decentralized service discovery approach
named Chord4S. Chord4S utilizes the data distribution and lookup capabilities of the popular
Chord protocol to distribute and discover services in a decentralized manner. Data availability is further
improved by distributing published descriptions of functionally equivalent services to different
successor nodes that are organized into virtual segments in the Chord4S circle. Based on the
service publication approach, Chord4S supports QoS-aware service discovery. Chord4S also
supports service discovery with wildcard(s). In addition, the Chord routing protocol is extended
to support efficient discovery of multiple services with a single query. This enables late
negotiation of Service Level Agreements (SLAs) between service consumers and multiple
candidate service providers. The experimental evaluation shows that Chord4S achieves higher
data availability and supports efficient queries with reasonable overhead.
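The Chord mechanism underlying Chord4S can be sketched in a few lines: keys and nodes are hashed onto an identifier circle, and a key is stored at its successor node (the first node clockwise from the key). This is a generic Chord-style lookup sketch, assuming a small ring size `2^m` for readability; Chord4S's segment organization and QoS extensions are not shown.

```python
import hashlib
from bisect import bisect_right

def ring_position(key, m=16):
    """Map a key onto a 2^m identifier circle (Chord-style)."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return h % (2 ** m)

def successor(node_ids, key_id):
    """First node clockwise from key_id on the ring; wraps around."""
    ids = sorted(node_ids)
    i = bisect_right(ids, key_id)
    return ids[i % len(ids)]

# A service description hashed to the ring is stored at its successor:
nodes = [10, 100, 1000]
owner = successor(nodes, ring_position("weather-service"))
```

Distributing functionally equivalent service descriptions to *different* successors, as the abstract describes, is what lifts availability above plain Chord: losing one node leaves the equivalent descriptions reachable elsewhere on the ring.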
5. A Two-Tiered On-Demand Resource Allocation Mechanism for VM-Based
Data Centers
Abstract :
In a shared virtual computing environment, dynamic load changes and the different quality
requirements of applications over their lifetimes give rise to dynamic and varying capacity
demands, which result in lower resource utilization and application quality under the existing
static resource allocation. Furthermore, the total required capacity of all the hosted applications in
current enterprise data centers, for example at Google, may surpass the capacity of the platform.
In this paper, we argue that existing techniques that turn servers on or off with the help of
virtual machine (VM) migration are not enough. Instead, finding an optimized dynamic resource
allocation method to solve the problem of on-demand resource provisioning for VMs is the key to
improving the efficiency of data centers. However, existing dynamic resource allocation
methods focus only on either local optimization within a server or central global
optimization, limiting the efficiency of data centers. We propose a two-tiered on-demand
resource allocation mechanism, consisting of local and global resource allocation with
feedback, to provide on-demand capacities to the concurrent applications. We model the on-
demand resource allocation using optimization theory. Based on the proposed dynamic resource
allocation mechanism and model, we propose a set of on-demand resource allocation algorithms.
Our algorithms preferentially ensure the performance of critical applications named by the data
center manager when resource competition arises, according to the time-varying capacity
demands and quality of the applications. Using Rainbow, a Xen-based prototype we
implemented, we evaluate the VM-based shared platform as well as the two-tiered on-demand
resource allocation mechanism and algorithms. The experimental results show that Rainbow
without dynamic resource allocation (Rainbow-NDA) provides 26 to 324 percent improvements
in application performance, as well as 26 percent higher average CPU utilization, than the
traditional service computing framework in which applications use exclusive servers. The two-
tiered on-demand resource allocation further improves performance by 9 to 16 percent for
critical applications (75 percent of the maximum possible improvement), while introducing up to
5 percent performance degradation to the others, with 1 to 5 percent improvements in resource
utilization in comparison with Rainbow-NDA.
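The priority given to manager-named critical applications can be illustrated with a minimal allocation sketch: satisfy critical demands first, then share the leftover capacity among the rest in proportion to demand. The function, its inputs, and the sample numbers are all hypothetical; the paper's mechanism additionally uses feedback and optimization across both tiers.

```python
def allocate(capacity, demands, critical):
    """Greedy sketch of priority-aware capacity allocation.
    Tier 1: critical applications receive their full demand first.
    Tier 2: remaining capacity is shared among the rest in
    proportion to their demands."""
    alloc = {}
    remaining = capacity
    for app in demands:
        if app in critical:
            alloc[app] = min(demands[app], remaining)
            remaining -= alloc[app]
    others = [a for a in demands if a not in critical]
    total = sum(demands[a] for a in others)
    for a in others:
        alloc[a] = remaining * demands[a] / total if total else 0.0
    return alloc

# Hypothetical CPU-share demands on a 100-unit server:
alloc = allocate(100, {"db": 40, "web": 30, "batch": 60}, {"db"})
```

Under contention (demands total 130 against 100 units), the critical "db" application keeps its full 40 units while "web" and "batch" absorb the shortfall proportionally, mirroring the degradation-to-others behavior reported in the abstract.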