
IEEE Projects 2013, M.Tech Projects 2013, Cloud Computing 2013, Final Year Engineering Projects (IEEE Based)


DESCRIPTION

CITL Tech Varsity is a leading institute that has been assisting academicians, M.Tech / MS / B.Tech / BE (EC, EEE, ETC, CS, IS, DCN, Power Electronics, Communication), MCA and BCA students, in various domains and technologies for the past several years.

DOMAINS WE ASSIST
HARDWARE: Embedded, Robotics, Quadcopter (Flying Robot), Biomedical, Biometric, Automotive, VLSI, Wireless (GSM, GPS, GPRS, RFID, Bluetooth, Zigbee), Embedded Android.
SOFTWARE: Cloud Computing, Mobile Computing, Wireless Sensor Network, Network Security, Networking, Wireless Network, Data Mining, Web Mining, Data Engineering, Cyber Crime, Android application development.
SIMULATION: Image Processing, Power Electronics, Power Systems, Communication, Biomedical, Geo Science & Remote Sensing, Digital Signal Processing, VANETs, Wireless Sensor Networks, Mobile Ad-hoc Networks.
TECHNOLOGIES WE WORK IN: Embedded (8051, PIC, ARM7, ARM9, Embedded C), VLSI (Verilog, VHDL, Xilinx), Embedded Android, JAVA / J2EE, XML, PHP, SOA, Dotnet, Java Android, Matlab and NS2.

TRAINING METHODOLOGY
1. Training on the technology as per the project requirement.
2. IEEE paper explanation, flow of the project, system design.
3. Algorithm implementation and explanation.
4. Project execution and demo.
5. Documentation and presentation of the project.



Cloud Computing

NO | PRJ TITLE | ABSTRACT | DOMAIN | YOP

1 CloudMoV: Cloud-based Mobile Social TV

The rapidly increasing power of personal mobile devices (smartphones, tablets, etc.) is providing much richer content and social interaction to users on the move. This trend, however, is throttled by the limited battery lifetime of mobile devices and unstable wireless connectivity, making the highest possible quality of service infeasible for mobile users. Recent cloud computing technology, with its rich resources that compensate for the limitations of mobile devices and connections, can potentially provide an ideal platform to support the desired mobile services. Tough challenges arise in how to effectively exploit cloud resources to facilitate mobile services, especially those with stringent interaction delay requirements. In this paper, we propose the design of a novel Cloud-based Mobile sOcial tV system (CloudMoV). The system effectively utilizes both PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service) cloud services to offer the living-room experience of video watching to a group of disparate mobile users who can interact socially while sharing the video. To guarantee good streaming quality as experienced by mobile users with time-varying wireless connectivity, we employ a surrogate for each user in the IaaS cloud that performs video downloading and social exchanges on behalf of the user. The surrogate performs efficient stream transcoding matched to the current connectivity quality of the mobile user. With battery life as a key performance bottleneck, we advocate the use of burst transmission from the surrogates to the mobile users, and carefully choose the burst size to achieve high energy efficiency and streaming quality. Social interactions among the users, in the form of spontaneous textual exchanges, are effectively achieved by an efficient data storage design with BigTable and dynamic handling of large volumes of concurrent messages in a typical PaaS cloud.
These designs for flexible transcoding capabilities, battery efficiency of mobile devices, and spontaneous social interactivity together provide an ideal platform for mobile social TV services. We have implemented CloudMoV on Amazon EC2 and Google App Engine and verified its superior performance through real-world experiments.

Cloud Computing

2013
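To make the burst-transmission trade-off in the CloudMoV abstract above concrete, here is a minimal, hypothetical sketch: larger bursts let the radio sleep longer between transfers and amortise the fixed "tail" energy spent after each transmission, but the burst size is capped by the client's buffer. All names, rates, and numbers are illustrative assumptions, not values from the paper.

```java
public class BurstSizing {

    /** Fraction of time the radio stays active when streaming at playRate
     *  (kbit/s) over a link of linkRate (kbit/s) with the given burst size
     *  (kbit) and a fixed radio "tail" time (s) after each burst. */
    public static double radioDutyCycle(double burstKbit, double linkRate,
                                        double playRate, double tailSec) {
        double activePerBurst = burstKbit / linkRate + tailSec; // send + tail
        double periodPerBurst = burstKbit / playRate;           // playback time per burst
        return Math.min(1.0, activePerBurst / periodPerBurst);
    }

    /** Larger bursts only lower the duty cycle in this model, so the client
     *  buffer cap is the chosen burst size. */
    public static double chooseBurst(double bufferKbit) {
        return bufferKbit;
    }

    public static void main(String[] args) {
        // Quadrupling the burst size lowers the radio duty cycle.
        double small = radioDutyCycle(1000, 5000, 500, 1.5);
        double large = radioDutyCycle(4000, 5000, 500, 1.5);
        System.out.printf("duty cycle: %.3f -> %.3f%n", small, large);
    }
}
```

The sketch shows only the direction of the trade-off; the paper's actual burst-size decision also accounts for streaming quality.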

2 An Improved Mutual Authentication Framework for Cloud Computing

In this paper, we propose a user authentication scheme for cloud computing. The proposed framework provides mutual authentication and session key agreement in a cloud computing environment. The scheme executes in three phases: a server initialization phase, a registration phase, and an authentication phase. Detailed security analyses have been carried out to validate the efficiency of the scheme. Further, the scheme resists possible attacks in cloud computing.

Cloud Computing

2013
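As an illustration of the three phases the abstract above names, here is a minimal sketch of a hash-based mutual challenge-response with session key agreement. This is a generic construction for illustration, not the paper's actual protocol; all method and variable names are assumptions.

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class MutualAuthSketch {
    private final Map<String, String> credentials = new HashMap<>();

    static String h(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256").digest(s.getBytes("UTF-8"));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Registration phase: the server stores only a digest of the password.
    public void register(String user, String password) {
        credentials.put(user, h(user + ":" + password));
    }

    // Authentication phase: both sides combine the shared secret digest with
    // fresh nonces; matching proofs authenticate each side to the other.
    public String serverProof(String user, String clientNonce, String serverNonce) {
        return h(credentials.get(user) + clientNonce + serverNonce);
    }
    public static String clientProof(String user, String password,
                                     String clientNonce, String serverNonce) {
        return h(h(user + ":" + password) + clientNonce + serverNonce);
    }

    // Session key agreement: both parties derive the same key from the proof,
    // without the secret ever being sent.
    public static String sessionKey(String proof) { return h("sk" + proof); }
}
```

If the client's proof matches the server's, each side knows the other holds the secret, and both derive the identical session key.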

3 Utility-aware deferred load balancing in the cloud driven by dynamic pricing of electricity

Distributed computing resources in a cloud computing environment provide an opportunity to reduce energy use and cost by shifting load in response to the dynamically varying availability of energy. This variation in electrical power availability is reflected in its dynamically changing price, which can be used to drive workload deferral against performance requirements. Such deferral, however, may cause user dissatisfaction. In this paper, we quantify the impact of deferral on user satisfaction and exploit the flexibility in service level agreements (SLAs) to adapt deferral to dynamic price variation. We differentiate among jobs based on their responsiveness requirements and schedule them to save energy while meeting deadlines and maintaining user satisfaction. Representing utility as decaying functions alongside workload deferral, we strike a balance between loss of user satisfaction and energy efficiency. We model delay as decaying functions, guarantee that no job violates its maximum deadline, and minimize the overall energy cost. Our simulations on MapReduce traces show that energy consumption can be reduced by 15% with such utility-aware deferred load balancing. We also find that treating utility as a decaying function yields better cost reduction than load balancing with a fixed deadline.

Cloud Computing

2013
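A toy version of the deferral decision described in the abstract above: utility decays the longer a job waits, the electricity price varies per slot, and the scheduler picks the slot (never past the deadline) minimising energy cost plus lost utility. The linear decay, weights, and names are illustrative assumptions, not the paper's model.

```java
public class DeferredBalancer {

    /** Decaying utility: full value if run immediately, linearly less until
     *  the deadline, never negative. */
    public static double utility(double u0, int slot, int deadline) {
        if (slot >= deadline) return 0.0;
        return u0 * (1.0 - (double) slot / deadline);
    }

    /** Choose the slot minimising price[slot]*energy + (u0 - utility(slot)),
     *  considering only slots strictly before the deadline. */
    public static int bestSlot(double[] price, double energy, double u0, int deadline) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int t = 0; t < Math.min(price.length, deadline); t++) {
            double cost = price[t] * energy + (u0 - utility(u0, t, deadline));
            if (cost < bestCost) { bestCost = cost; best = t; }
        }
        return best;
    }
}
```

With a cheap slot 1 (price 0.2 vs 0.9), a job worth 3 units of utility is deferred one slot: the small utility loss buys a large energy-cost saving.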

#56, II Floor, Pushpagiri Complex, 17th Cross, 8th Main, Opp. Water Tank, Vijaynagar, Bangalore - 560040.

Website: www.citlprojects.com, Email ID: [email protected], [email protected]: 9886173099 / 9986709224, PH: 080-23208045 / 23207367

JAVA / J2EE PROJECTS – 2013 (Networking, Network Security, Mobile Computing, Cloud Computing, Wireless Sensor Network, Data Mining, Web Mining, Artificial Intelligence, VANET, Ad-Hoc Network)


4 A new framework to integrate wireless sensor networks with cloud computing

Wireless Sensor Networks (WSNs) have been a research focus for several years. WSNs enable novel and attractive solutions for information gathering across a spectrum of endeavours including transportation, business, health care, industrial automation, and environmental monitoring. Despite these advances, the exponentially increasing volume of data extracted from WSNs is not being put to adequate use, owing to a lack of the expertise, time, and money with which the data might be better explored and stored for future use. The next generation of WSNs will benefit when sensor data is added to blogs, virtual communities, and social network applications. This transformation of data derived from sensor networks into a valuable resource for information-hungry applications will benefit from techniques being developed for the emerging Cloud Computing technologies. Traditional High Performance Computing approaches may be replaced, or may find a place in data manipulation prior to the data being moved into the Cloud. In this paper, a novel framework is proposed to integrate the Cloud Computing model with WSNs. Deployed WSNs will be connected to the proposed infrastructure. User requests will be served via three service layers (IaaS, PaaS, SaaS), either from an archive, built by periodically collecting data from the WSNs into Data Centres (DCs), or by issuing a live query to the corresponding sensor network.

Cloud Computing

2013

5 A packet marking approach to protect cloud environment against DDoS attacks

Cloud computing uses the internet and remote servers to maintain data and applications. It offers dynamically virtualized resources, bandwidth, and on-demand software to consumers over the internet, and promises the distribution of many economic benefits among its adopters. It helps consumers reduce spending on hardware, software licenses, and system maintenance. The Simple Object Access Protocol (SOAP) enables communication between different web services; SOAP messages are constructed using HyperText Transfer Protocol (HTTP) and/or Extensible Markup Language (XML). A new form of Distributed Denial of Service (DDoS) attack could potentially bring down cloud web services through the use of HTTP and XML, and cloud computing suffers a major security threat from HTTP and XML Denial of Service (DoS) attacks. An HX-DoS attack is a combination of HTTP and XML messages intentionally sent to flood and destroy the communication channel of the cloud service provider. To address HX-DoS attacks against cloud web services, there is a need to distinguish between legitimate and illegitimate messages. This can be done using rule-set-based detection, called CLASSIE, with a modulo marking method used to prevent spoofing attacks. A Reconstruct and Drop method is used to make decisions and drop packets on the victim side. This enables us to reduce the false positive rate and improve the detection and filtering of DDoS attacks.

Cloud Computing

2013
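A simplified sketch of the two victim-side steps the abstract above names: modulo packet marking by intermediate hops (so a spoofed source address cannot forge a consistent path), and a rule-set check that drops packets whose reconstructed mark path matches a known attack signature. The marking scheme and rule set here are illustrative assumptions, not CLASSIE itself.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MarkAndDrop {
    // Each hop contributes (routerId mod MOD) to the packet's mark list.
    static final int MOD = 16;

    public static List<Integer> markPath(int[] routerIds) {
        List<Integer> marks = new ArrayList<>();
        for (int id : routerIds) marks.add(id % MOD);
        return marks;
    }

    // Victim-side rule set of mark paths known to carry HX-DoS floods.
    private final Set<List<Integer>> badPaths = new HashSet<>();

    public void addRule(List<Integer> attackPath) { badPaths.add(attackPath); }

    /** Reconstruct-and-drop: true means the packet is discarded. */
    public boolean drop(List<Integer> marks) { return badPaths.contains(marks); }
}
```

Because the marks are set by routers rather than the sender, two packets from the same path carry the same marks regardless of the spoofed source field.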

6 C-MART: Benchmarking the Cloud (Parallel and Distributed Systems)

Cloud computing environments provide on-demand resource provisioning, allowing applications to elastically scale. However, application benchmarks currently being used to test cloud management systems are not designed for this purpose. This results in resource underprovisioning and quality-of-service (QoS) violations when systems tested using these benchmarks are deployed in production environments. We present C-MART, a benchmark designed to emulate a modern web application running in a cloud computing environment. It is designed using the cloud computing paradigm of elastic scalability at every application tier and utilizes modern web-based technologies such as HTML5, AJAX, jQuery, and SQLite. C-MART consists of a web application, client emulator, deployment server, and scaling API. The deployment server automatically deploys and configures the test environment in orders of magnitude less time than current benchmarks. The scaling API allows users to define and provision their own customized datacenter. The client emulator generates the web workload for the application by emulating complex and varied client behaviors, including decisions based on page content and prior history. We show that C-MART can detect problems in management systems that previous benchmarks fail to identify, such as an increase from 4.4 to 50 percent error in predicting server CPU utilization and resource underprovisioning in 22 percent of QoS measurements.

Cloud Computing

2013

7 Pre-emptive scheduling of on-line real time services with task migration for cloud computing

This paper presents a new scheduling approach that addresses the online scheduling problem of real-time tasks using the "Infrastructure as a Service" model offered by cloud computing. Real-time tasks are scheduled pre-emptively with the intent of maximizing total utility and efficiency. In the traditional approach, tasks are scheduled non-pre-emptively with two types of Time Utility Functions (TUFs), a profit TUF and a penalty TUF, and the task with the highest expected gain is executed. When a new task arrives with a higher priority, it cannot be taken up for execution until the currently running task completes, so the higher-priority task may wait for a long time. That scheduling method sensibly aborts a task when it misses its deadline; note, however, that before a task is aborted it consumes system resources, including network bandwidth, storage space, and processing power, which degrades overall system performance and task response time. In our approach, a pre-emptive online scheduling algorithm with task migration is proposed for the cloud computing environment, in order to minimize response time and improve task efficiency. Whenever a task misses its deadline, it is migrated to another virtual machine. This improves overall system performance and maximizes total utility. Our simulation results show that the approach outperforms traditional scheduling algorithms such as Earliest Deadline First (EDF) and an earlier scheduling approach based on a similar model.

Cloud Computing

2013
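A small sketch of the ideas in the abstract above: each task carries a profit time-utility function that decays toward its deadline, the scheduler always runs the task with the highest current utility (preempting whatever ran before), and a task that misses its deadline is migrated rather than aborted. The linear TUF and all names are illustrative assumptions, not the paper's algorithm.

```java
public class TufScheduler {
    public static class Task {
        final String name; final int deadline; final double profit;
        Task(String name, int deadline, double profit) {
            this.name = name; this.deadline = deadline; this.profit = profit;
        }
        /** Linearly decaying profit TUF; zero once the deadline has passed. */
        double utility(int now) {
            return now >= deadline ? 0.0 : profit * (1.0 - (double) now / deadline);
        }
    }

    /** Pick the task to run at time `now`: the highest current utility wins,
     *  which preempts the previously running task if necessary. */
    public static Task pick(Iterable<Task> ready, int now) {
        Task best = null;
        for (Task t : ready)
            if (best == null || t.utility(now) > best.utility(now)) best = t;
        return best;
    }

    /** Deadline-miss handling: migrate to another VM instead of aborting. */
    public static String onMiss(Task t) {
        return "migrate " + t.name + " to another VM";
    }
}
```

An urgent high-profit task wins while its deadline is still reachable; once its deadline passes, its utility drops to zero and the scheduler moves on.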


8 Facial Expression Recognition in the Encrypted Domain Based on Local Fisher Discriminant Analysis

Facial expression recognition is a critical capability for human-interacting systems that aim to respond to variations in a human's emotional state. Recent trends toward cloud computing and outsourcing have led to the requirement for facial expression recognition to be performed remotely by potentially untrusted servers. This paper presents a system that addresses the challenge of performing facial expression recognition when the test image is in the encrypted domain. More specifically, to the best of our knowledge, this is the first known result that performs facial expression recognition in the encrypted domain. Such a system removes the need to trust servers, since the test image can remain in encrypted form at all times without any decryption, even during the expression recognition process. Our experimental results on the popular JAFFE and MUG facial expression databases demonstrate that a recognition rate of up to 95.24 percent can be achieved even in the encrypted domain.

Cloud Computing

2013

9 Optimistic fuzzy-based signature identification in cloud using multimedia mining and analysis techniques

Client-level security issues in cloud computing have become a major challenge in the service access process in a cloud environment. The number of threats over the network increases day by day because of the huge demand for cloud products and services, and existing authentication systems are unable to provide sufficient security and user identification. The proposed scheme attempts to provide optimistic user signature identification through mining analysis, while a fuzzy-logic-based user classification module provides sufficient security for cloud service access. The scheme reduces the complexity of the key exchange process found in cryptographic techniques. With the help of strong mining tools and fuzzy computations, we aim to show that the proposed scheme provides sufficient user classification and security.

Cloud Computing

2013

10 Toward Secure Multikeyword Top-k Retrieval over Encrypted Cloud Data

Cloud computing has emerged as a promising paradigm for data outsourcing and high-quality data services. However, concerns about sensitive information on the cloud potentially cause privacy problems. Data encryption protects data security to some extent, but at the cost of compromised efficiency. Searchable symmetric encryption (SSE) allows retrieval of encrypted data over the cloud. In this paper, we focus on addressing data privacy issues using SSE. For the first time, we formulate the privacy issue from the aspects of similarity relevance and scheme robustness. We observe that server-side ranking based on order-preserving encryption (OPE) inevitably leaks data privacy. To eliminate the leakage, we propose a two-round searchable encryption (TRSE) scheme that supports top-k multikeyword retrieval. In TRSE, we employ a vector space model and homomorphic encryption. The vector space model helps provide sufficient search accuracy, and the homomorphic encryption enables users to participate in the ranking while the majority of the computing work is done on the server side by operations on ciphertext only. As a result, information leakage is eliminated and data security is ensured. Thorough security and performance analysis shows that the proposed scheme guarantees high security and practical efficiency.

Cloud Computing

2013
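An illustrative sketch of the ranking step the abstract above describes: documents and queries are vectors in a term space, the server computes per-document relevance scores, and the client keeps only the top-k. In TRSE the scoring happens over ciphertexts via homomorphic operations; here the same arithmetic is shown in plaintext, and all names are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class TopKRetrieval {

    /** Dot-product relevance score of one document vector against the query.
     *  (In TRSE this sum of products is evaluated homomorphically.) */
    public static long score(long[] doc, long[] query) {
        long s = 0;
        for (int i = 0; i < doc.length; i++) s += doc[i] * query[i];
        return s;
    }

    /** Client-side ranking: indices of the k highest-scoring documents. */
    public static List<Integer> topK(long[][] docs, long[] query, int k) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < docs.length; i++) ids.add(i);
        ids.sort((a, b) -> Long.compare(score(docs[b], query), score(docs[a], query)));
        return ids.subList(0, Math.min(k, ids.size()));
    }
}
```

Keeping the final sort on the client is what avoids the order-preserving-encryption leak: the server never learns the relative ranking of the documents.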

11 Mona: Secure Multi-Owner Data Sharing for Dynamic Groups in the Cloud

Thanks to its low-maintenance character, cloud computing provides an economical and efficient solution for sharing group resources among cloud users. Unfortunately, sharing data in a multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, owing to frequent changes in membership. In this paper, we propose a secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud. By leveraging group signature and dynamic broadcast encryption techniques, any cloud user can anonymously share data with others. Meanwhile, the storage overhead and encryption computation cost of our scheme are independent of the number of revoked users. In addition, we analyze the security of our scheme with rigorous proofs and demonstrate its efficiency in experiments.

Cloud Computing

2013

12 Optimizing Cloud Resources for Delivering IPTV Services Through Virtualization

Virtualized cloud-based services can take advantage of statistical multiplexing across applications to yield significant cost savings. However, achieving similar savings with real-time services can be a challenge. In this paper, we seek to lower a provider's costs for real-time IPTV services through a virtualized IPTV architecture and through intelligent time-shifting of selected services. Using Live TV and Video-on-Demand (VoD) as examples, we show that we can take advantage of the different deadlines associated with each service to effectively multiplex these services. We provide a generalized framework for computing the amount of resources needed to support multiple services, without missing the deadline for any service. We construct the problem as an optimization formulation that uses a generic cost function. We consider multiple forms for the cost function (e.g., maximum, convex and concave functions) reflecting the cost of providing the service. The solution to this formulation gives the number of servers needed at different time instants to support these services. We implement a simple mechanism for time-shifting scheduled jobs in a simulator and study the reduction in server load using real traces from an operational IPTV network. Our results show that we are able to reduce the load by ~ 24% (compared to a possible ~ 31%). We also show that there are interesting open problems in designing mechanisms that allow time-shifting of load in such environments.

Cloud Computing

2013
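A toy version of the time-shifting idea in the abstract above: Live TV demand per slot is fixed, but each deferrable VoD unit may run in any slot up to its deadline, so a greedy pass places it in the least-loaded feasible slot to flatten the peak number of servers needed. This is purely illustrative, not the paper's optimization formulation; all names are assumptions.

```java
public class TimeShift {

    /** Place each VoD unit (given as its deadline slot, inclusive) into the
     *  currently least-loaded slot it can still meet; return per-slot load. */
    public static int[] schedule(int[] liveLoad, int[] vodDeadlines) {
        int[] load = liveLoad.clone();
        for (int deadline : vodDeadlines) {
            int best = 0;
            for (int t = 1; t <= deadline && t < load.length; t++)
                if (load[t] < load[best]) best = t;
            load[best]++; // serve this VoD unit in the chosen slot
        }
        return load;
    }

    /** Peak load = number of servers that must be provisioned. */
    public static int peak(int[] load) {
        int p = 0;
        for (int v : load) p = Math.max(p, v);
        return p;
    }
}
```

With live load {3, 1, 1} and two VoD units deferrable up to slot 2, naive immediate service needs 5 servers at slot 0, while shifting keeps the peak at 3.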


13 An Effective Network Traffic Classification Method with Unknown Flow Detection

Traffic classification is an essential tool for network and system security in complex environments such as cloud computing. State-of-the-art traffic classification methods aim to take advantage of flow statistical features and machine learning techniques; however, classification performance is severely affected by limited supervised information and unknown applications. To achieve effective network traffic classification, we propose a new method to tackle the problem of unknown applications in the crucial situation of a small supervised training set. The proposed method has a superior capability for detecting unknown flows generated by unknown applications, and utilizes correlation information among real-world network traffic to boost classification performance. A theoretical analysis is provided to confirm the performance benefit of the proposed method. Moreover, a comprehensive performance evaluation conducted on two real-world network traffic datasets shows that the proposed scheme outperforms existing methods in this critical network environment.

Cloud Computing

2013

14 A Refined RBAC Model for Cloud Computing

Cloud computing is a fast-growing field that is arguably a new computing paradigm. In cloud computing, computing resources are provided as services over the Internet, and users can access resources based on their payments. This paper discusses cloud computing and its related security risks, with a focus on access control. As a traditional access control mechanism, the role-based access control (RBAC) model can be used to implement several important security principles such as least privilege, separation of duties, and data abstraction. This paper presents an ongoing effort to refine the entities of RBAC for use in cloud computing, and further discusses their security implications. We argue that RBAC is well suited to many situations in cloud computing where users or applications can be clearly separated according to their job functions.

Cloud Computing

2012
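The core RBAC idea discussed in the abstract above, as a minimal sketch: permissions attach to roles, users acquire permissions only through role membership, and least privilege follows from assigning narrow roles. The entity and permission names are illustrative assumptions, not the paper's refined model.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RbacSketch {
    private final Map<String, Set<String>> rolePerms = new HashMap<>();
    private final Map<String, Set<String>> userRoles = new HashMap<>();

    /** Attach a permission to a role. */
    public void grant(String role, String permission) {
        rolePerms.computeIfAbsent(role, r -> new HashSet<>()).add(permission);
    }

    /** Make a user a member of a role. */
    public void assign(String user, String role) {
        userRoles.computeIfAbsent(user, u -> new HashSet<>()).add(role);
    }

    /** Access check: allowed iff some role of the user carries the permission. */
    public boolean check(String user, String permission) {
        for (String role : userRoles.getOrDefault(user, Set.of()))
            if (rolePerms.getOrDefault(role, Set.of()).contains(permission))
                return true;
        return false;
    }
}
```

Separation of duties then amounts to never assigning one user two conflicting roles, a constraint easy to enforce on top of this structure.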

15 Building Crawler Engine on Cloud Computing Infrastructure

This paper aims to implement a crawler engine (search engine) on cloud computing infrastructure. The approach uses virtual machines on a cloud computing infrastructure to run the crawler service engines as well as the application servers. Based on our initial experiments, this research has successfully built a crawler engine that runs on virtual machines (VMs) of a cloud computing infrastructure. The use of VMs in this architecture eases setup and installation, maintenance, and the termination of VMs that have been running particular crawler services, as needed. With this infrastructure, the capacity and capability of multiple crawler engines can be increased or decreased easily and more efficiently.

Cloud Computing

2012

16 Reliable Re-encryption in Unreliable Clouds

A key approach to securing cloud computing is for the data owner to store encrypted data in the cloud and issue decryption keys to authorized users. Then, when a user is revoked, the data owner issues re-encryption commands to the cloud to re-encrypt the data, preventing the revoked user from decrypting it, and issues new decryption keys to valid users so that they can continue to access the data. However, since a cloud computing environment comprises many cloud servers, such commands may not be received and executed by all of the cloud servers, owing to unreliable network communication. In this paper, we solve this problem by proposing a time-based re-encryption scheme, which enables the cloud servers to automatically re-encrypt data based on their internal clocks. Our solution is built on top of a new encryption scheme, attribute-based encryption, to allow fine-grained access control, and does not require perfect clock synchronization for correctness.

Cloud Computing

2012
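A minimal sketch of the time-based idea the abstract above describes: every cloud server derives the current data key from a shared root key and its own clock's epoch number, so re-encryption happens on schedule even if a revocation command never reaches some server. The hash-based derivation below is an illustrative stand-in for the paper's attribute-based scheme; all names are assumptions.

```java
import java.security.MessageDigest;

public class TimedKeys {
    static byte[] sha(byte[] in) {
        try { return MessageDigest.getInstance("SHA-256").digest(in); }
        catch (Exception e) { throw new RuntimeException(e); }
    }

    /** Epoch number from a clock reading (ms) and epoch length (ms). */
    public static long epoch(long clockMillis, long epochMillis) {
        return clockMillis / epochMillis;
    }

    /** Key for an epoch: hash the root key together with the epoch number.
     *  Two servers whose clocks land in the same epoch derive the same key,
     *  so perfect clock synchronization is not required, only clocks that
     *  agree to within an epoch. */
    public static byte[] epochKey(byte[] rootKey, long epoch) {
        byte[] tag = Long.toString(epoch).getBytes();
        byte[] in = new byte[rootKey.length + tag.length];
        System.arraycopy(rootKey, 0, in, 0, rootKey.length);
        System.arraycopy(tag, 0, in, rootKey.length, tag.length);
        return sha(in);
    }
}
```

A revoked user's old key stops working as soon as every server's clock rolls into the next epoch, with no per-server command needed.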

17 Towards Green P2P: Analysis of Energy Consumption in P2P and Approaches to Control

Nowadays, information and communication technology (ICT) has become more and more energy conscious. In this paper, we focus on peer-to-peer (P2P) systems, which contribute a major fraction of Internet traffic. The paper proposes analytical models of energy consumption in P2P systems. The models consider content pollution, the most common attack on P2P systems, which has received little attention in previous work on green P2P. Analysis of the models shows that the popular sleep method in green computing can harm peer-to-peer performance: when the online time of clean-copy holders is cut too far, the system collapses. To find the balance between energy saving and system maintenance, the concept of energy effectiveness is introduced, and an approach for controlling energy consumption while keeping the system stable is suggested. We show that the whole system can benefit if some warm-hearted and smart peers are willing to spend a little extra on energy when most peers overly cut their power-on time. This approach can complement the popular sleep methods in green computing.

Cloud Computing

2012

18 Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud

In recent years, ad-hoc parallel data processing has emerged as one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. Major cloud computing companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks currently in use were designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for large parts of the submitted job and unnecessarily increase processing time and cost. In this paper we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines, which are automatically instantiated and terminated during job execution.

Cloud Computing

2012

19 Dynamic Load-Balanced Multicast for Data-Intensive Applications on Clouds

Data-intensive parallel applications on clouds need to deploy large data sets from the cloud's storage facility to all compute nodes as fast as possible. Many multicast algorithms have been proposed for clusters and grid environments. The most common approach is to construct one or more spanning trees based on the network topology and network monitoring data in order to maximize available bandwidth and avoid bottleneck links. However, delivering optimal performance becomes difficult once the available bandwidth changes dynamically.

Cloud Computing

2012

20 Service Oriented Architecture for Cloud based Travel Reservation Software as a Service

The Cloud is gaining popularity as a means of saving the cost of IT ownership and accelerating time to market, thanks to ready-to-use, dynamically scalable computing infrastructure and software services offered on the Cloud on a pay-per-use basis. The design of a software solution for delivery as a shared service over the Cloud requires specific considerations. In this paper we describe an approach to the design of a travel reservation solution for corporate business travelers, based on the Service Oriented Architecture, Software-as-a-Service, and Cloud Computing paradigms.

Cloud Computing

2012

21 A New Wireless Web Access Mode Based on Cloud Computing

Because most websites are designed for desktop PCs, it is extremely uncomfortable to browse their large pages on a wireless handheld device with a small screen and limited user interface, so it is necessary to adapt such web pages to small-screen devices. Moreover, given the limited computing ability and storage capacity of wireless handheld devices, it is also extremely challenging to deploy an existing web page adaptation engine on them. By drawing on the huge computing ability and storage resources of a cloud computing infrastructure, a new wireless web access mode is proposed. First, the system framework is presented. Then the two key components of the system are described in detail: one is a distributed web page adaptation engine, designed so that it can be run by the computing cloud in a distributed and parallel manner; the other is distributed web page block management based on cloud computing, proposed so that the web page adaptation engine can be deployed reasonably. Moreover, a prototype system and a set of evaluation experiments have been implemented.

Cloud Computing

2012

Parallel and Distributed

22 Privacy-Preserving Data Sharing with Anonymous ID Assignment

An algorithm for the anonymous sharing of private data among N parties is developed. This technique is used iteratively to assign the participating nodes ID numbers ranging from 1 to N. The assignment is anonymous in that the identities received are unknown to the other members of the group. Resistance to collusion among other members is verified in an information-theoretic sense when private communication channels are used. This assignment of serial numbers allows more complex data to be shared, and has applications to other problems in privacy-preserving data mining, collision avoidance in communications, and distributed database access. The required computations are distributed without using a trusted central authority. Existing and new algorithms for assigning anonymous IDs are examined with respect to trade-offs between communication and computational requirements. The new algorithms are built on top of a secure-sum data mining operation using Newton's identities and Sturm's theorem. An algorithm for the distributed solution of certain polynomials over finite fields enhances the scalability of the algorithms. Markov chain representations are used to find statistics on the number of iterations required, and computer algebra gives closed-form results for the completion rates.

Parallel and distributed

2013

23 Grouping-Proofs-Based Authentication Protocol for Distributed RFID Systems

Along with radio frequency identification (RFID) becoming ubiquitous, security issues have attracted extensive attention. Most studies focus on the single-reader, single-tag case to provide security protection, which leads to certain limitations for diverse applications. This paper proposes a grouping-proofs-based authentication protocol (GUPA) to address the security issue of simultaneously identifying multiple readers and tags in distributed RFID systems. In GUPA, a distributed authentication mode with independent subgrouping proofs is adopted to enhance hierarchical protection; an asymmetric denial scheme is applied to grant fault-tolerance capabilities against an illegal reader or tag; and a sequence-based odd-even alternation group subscript is presented to define a function for secret updating. Meanwhile, GUPA is shown to be robust enough to resist major attacks such as replay, forgery, tracking, and denial of proof. Furthermore, performance analysis shows that, compared with known grouping-proof- or yoking-proof-based protocols, GUPA has lower communication overhead and computation load. This indicates that GUPA, which realizes both secure and simultaneous identification, is efficient for resource-constrained distributed RFID systems.

Parallel and distributed

2013

Page 6: IEEE Projects 2013, Mtech projects 2013,Cloud Computing 2013,Final year engineering projects ieee based

24     SPOC: A Secure and Privacy-Preserving Opportunistic Computing Framework for Mobile-Healthcare Emergency

With the pervasiveness of smart phones and the advance of wireless body sensor networks (BSNs), mobile Healthcare (m-Healthcare), which extends the operation of a Healthcare provider into a pervasive environment for better health monitoring, has attracted considerable interest recently. However, the flourishing of m-Healthcare still faces many challenges, including information security and privacy preservation. In this paper, we propose a secure and privacy-preserving opportunistic computing framework, called SPOC, for m-Healthcare emergency. With SPOC, smart phone resources, including computing power and energy, can be opportunistically gathered to process the computing-intensive personal health information (PHI) during an m-Healthcare emergency with minimal privacy disclosure. Specifically, to balance PHI privacy disclosure against the high reliability of PHI processing and transmission in m-Healthcare emergency, we introduce an efficient user-centric privacy access control in the SPOC framework, which is based on attribute-based access control and a new privacy-preserving scalar product computation (PPSPC) technique, and allows a medical user to decide who can participate in the opportunistic computing to assist in processing his overwhelming PHI data. Detailed security analysis shows that the proposed SPOC framework can efficiently achieve user-centric privacy access control in m-Healthcare emergency. In addition, performance evaluations via extensive simulations demonstrate SPOC's effectiveness in terms of providing highly reliable PHI processing and transmission while minimizing the privacy disclosure during m-Healthcare emergency.

Parallel and distributed

2013

37       FireCol: a collaborative protection network for the detection of flooding DDoS attacks

  Parallel and distributed

2013

38  Timely and continuous machine-learning-based classification for interactive IP traffic

Parallel and distributed

2013

39  Privacy- and integrity-preserving range queries in sensor networks

  Parallel and distributed

2013

40   Anomaly extraction in backbone networks using association rules

  Parallel and distributed

2013

41 Signature Neural Networks: Definition and Application to Multidimensional Sorting Problems

Parallel and distributed

2013

42   Blind Image Quality Assessment Using a General Regression Neural Network.

  Parallel and distributed

2013

43 Modeling of Complex-Valued Wiener Systems Using B-Spline Neural Network. 

  Parallel and distributed

2013

Knowledge and data

44 A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data

Feature selection involves identifying a subset of the most useful features that produces results comparable to those of the original entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view. While efficiency concerns the time required to find a subset of features, effectiveness is related to the quality of the subset of features. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The FAST algorithm works in two steps. In the first step, features are divided into clusters by using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form a subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt the efficient minimum-spanning-tree (MST) clustering method. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments are carried out to compare FAST and several representative feature selection algorithms, namely FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection. The results, on 35 publicly available real-world high-dimensional image, microarray, and text data sets, demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
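As a rough illustration of the two-step idea (MST-based clustering of features, then one representative per cluster), here is a minimal sketch. It substitutes plain Pearson correlation for the symmetric-uncertainty measure FAST actually uses, and cutting MST edges at a fixed threshold is a simplification; `fast_select` and its parameters are made-up names.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def fast_select(features, target, threshold=0.5):
    """Two-step FAST-style selector (illustrative simplification).

    Step 1: build a complete graph on features weighted by 1 - |corr|,
    take its minimum spanning tree (Prim), and drop MST edges longer than
    `threshold`; the surviving components are the clusters.
    Step 2: from each cluster keep the feature most correlated with target.
    """
    m = len(features)
    dist = [[1 - abs(pearson(features[i], features[j])) for j in range(m)]
            for i in range(m)]
    in_tree, edges = {0}, []
    while len(in_tree) < m:                       # Prim's MST
        best = min(((dist[i][j], i, j) for i in in_tree
                    for j in range(m) if j not in in_tree),
                   key=lambda t: t[0])
        edges.append(best)
        in_tree.add(best[2])
    parent = list(range(m))                       # union-find over short edges
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for w, i, j in edges:
        if w <= threshold:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(m):
        clusters.setdefault(find(i), []).append(i)
    return sorted(max(c, key=lambda i: abs(pearson(features[i], target)))
                  for c in clusters.values())
```

Redundant (highly correlated) features end up in the same cluster and only one survives, which is the source of the smaller subsets reported above.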

Knowledge and data

2013

45 Anomaly Detection via Online Oversampling Principal Component Analysis

Anomaly detection has been an important research topic in data mining and machine learning. Many real-world applications such as intrusion or credit card fraud detection require an effective and efficient framework to identify deviated data instances. However, most anomaly detection methods are typically implemented in batch mode, and thus cannot be easily extended to large-scale problems without sacrificing computation and memory requirements. In this paper, we propose an online oversampling principal component analysis (osPCA) algorithm to address this problem, and we aim at detecting the presence of outliers from a large amount of data via an online updating technique. Unlike prior principal component analysis (PCA)-based approaches, we do not store the entire data matrix or covariance matrix, and thus our approach is especially of interest in online or large-scale problems. By oversampling the target instance and extracting the principal direction of the data, the proposed osPCA allows us to determine the anomaly of the target instance according to the variation of the resulting dominant eigenvector. Since our osPCA need not perform eigen analysis explicitly, the proposed framework is favored for online applications which have computation or memory limitations. Compared with the well-known power method for PCA and other popular anomaly detection algorithms, our experimental results verify the feasibility of our proposed method in terms of both accuracy and efficiency.
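The central idea, duplicating the target point and watching how far the dominant principal direction tilts, can be sketched directly. This version recomputes the leading eigenvector with power iteration rather than using the paper's online update without explicit eigen-analysis, so it is only a conceptual illustration; the function names and the 1 − |cos| score are assumptions.

```python
import math, random

def dominant_eigvec(data, iters=200, seed=0):
    """Leading principal direction of mean-centered `data`, by power iteration."""
    n, d = len(data), len(data[0])
    mean = [sum(row[k] for row in data) / n for k in range(d)]
    centered = [[row[k] - mean[k] for k in range(d)] for row in data]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(d)]
    for _ in range(iters):
        # w = C v with C = X^T X / n, computed as X^T (X v) without forming C
        xv = [sum(r[k] * v[k] for k in range(d)) for r in centered]
        w = [sum(centered[i][k] * xv[i] for i in range(n)) / n for k in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def ospca_score(data, target, oversample=5):
    """Oversampling-PCA anomaly score: how much does duplicating the target
    tilt the dominant principal direction? (1 - |cos|; larger = more anomalous)"""
    base = dominant_eigvec(data)
    tilted = dominant_eigvec(data + [target] * oversample)
    cos = abs(sum(a * b for a, b in zip(base, tilted)))
    return 1 - cos
```

A normal point barely moves the principal direction (score near 0), while an outlier drags it toward itself (score near 1), which is the variation-of-the-dominant-eigenvector criterion described above.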

Knowledge and data

2013


46   Lineage Encoding: An Efficient Wireless XML Streaming Supporting Twig Pattern Queries

In this paper, we propose an energy- and latency-efficient XML dissemination scheme for mobile computing. We define a novel unit structure called the G-node for streaming XML data in the wireless environment. It exploits the benefits of structure indexing and attribute summarization, which can integrate relevant XML elements into a group, and it provides a way for selective access of their attribute values and text content. We also propose a lightweight and effective encoding scheme, called Lineage Encoding, to support the evaluation of predicates and twig pattern queries over the stream. The Lineage Encoding scheme represents the parent-child relationships among XML elements as a sequence of bit-strings, called Lineage Code(V, H), and provides basic operators and functions for effective twig pattern query processing at mobile clients. Extensive experiments using real and synthetic data sets demonstrate that our scheme outperforms conventional wireless XML broadcasting methods for simple path queries as well as complex twig pattern queries with predicate conditions.

Knowledge and data

2013

47   MKBoost: A Framework of Multiple Kernel Boosting

Multiple kernel learning (MKL) is a promising family of machine learning algorithms using multiple kernel functions for various challenging data mining tasks. Conventional MKL methods often formulate the problem as an optimization task of learning the optimal combinations of both kernels and classifiers, which usually results in some forms of challenging optimization tasks that are often difficult to solve. Different from the existing MKL methods, in this paper, we investigate a boosting framework of MKL for classification tasks, i.e., we adopt boosting to solve a variant of the MKL problem, which avoids solving the complicated optimization tasks. Specifically, we present a novel framework of multiple kernel boosting (MKBoost), which applies the idea of boosting techniques to learn kernel-based classifiers with multiple kernels for classification problems. Based on the proposed framework, we propose several variants of MKBoost algorithms and extensively examine their empirical performance on a number of benchmark data sets in comparison to various state-of-the-art MKL algorithms on classification tasks. Experimental results show that the proposed method is more effective and efficient than the existing MKL techniques.

Knowledge and data

2013

48       TACI: Taxonomy-Aware Catalog Integration

A fundamental data integration task faced by online commercial portals and commerce search engines is the integration of products coming from multiple providers to their product catalogs. In this scenario, the commercial portal has its own taxonomy (the “master taxonomy”), while each data provider organizes its products into a different taxonomy (the “provider taxonomy”). In this paper, we consider the problem of categorizing products from the data providers into the master taxonomy, while making use of the provider taxonomy information. Our approach is based on a taxonomy-aware processing step that adjusts the results of a text-based classifier to ensure that products that are close together in the provider taxonomy remain close in the master taxonomy. We formulate this intuition as a structured prediction optimization problem. To the best of our knowledge, this is the first approach that leverages the structure of taxonomies in order to enhance catalog integration. We propose algorithms that are scalable and thus applicable to the large data sets that are typical on the web. We evaluate our algorithms on real-world data and we show that taxonomy-aware classification provides a significant improvement over existing approaches.

Knowledge and data

2013

49      Mining Order-Preserving Submatrices from Data with Repeated Measurements

Order-preserving submatrices (OPSMs) have been shown useful in capturing concurrent patterns in data when the relative magnitudes of data items are more important than their exact values. For instance, in analyzing gene expression profiles obtained from microarray experiments, the relative magnitudes are important both because they represent the change of gene activities across the experiments, and because there is typically a high level of noise in data that makes the exact values untrustworthy. To cope with data noise, repeated experiments are often conducted to collect multiple measurements. We propose and study a more robust version of OPSM, where each data item is represented by a set of values obtained from replicated experiments. We call the new problem OPSM-RM (OPSM with repeated measurements). We define OPSM-RM based on a number of practical requirements. We discuss the computational challenges of OPSM-RM and propose a generic mining algorithm. We further propose a series of techniques to speed up the two time-dominating components of the algorithm. We show the effectiveness and efficiency of our methods through a series of experiments conducted on real microarray data.
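A minimal sketch of what "supporting a column order under repeated measurements" might look like: one simple rule (an assumption, not the paper's definition, which is built from several practical requirements) is to require the replicate medians to increase strictly along the candidate column order.

```python
from statistics import median

def supports_order(row, column_order):
    """Does a row with replicated measurements support a column order?

    `row[c]` is the list of repeated measurements for column c. Assumed rule:
    the per-column replicate medians must be strictly increasing along
    `column_order` (one robust way to absorb replicate noise).
    """
    meds = [median(row[c]) for c in column_order]
    return all(a < b for a, b in zip(meds, meds[1:]))

def opsm_rm_support(rows, column_order):
    """Indices of rows supporting the order; an OPSM-RM miner searches for
    column orders whose support set is large."""
    return [i for i, row in enumerate(rows) if supports_order(row, column_order)]
```

The mining problem is then to find long column orders with many supporting rows, which is where the speed-up techniques mentioned above matter.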

Knowledge and data

2013

Networking

16 A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks With Order-Optimal Per-Flow Delay

Quantifying the end-to-end delay performance in multihop wireless networks is a well-known challenging problem. In this paper, we propose a new joint congestion control and scheduling algorithm for multihop wireless networks with fixed-route flows operated under a general interference model with interference degree K. Our proposed algorithm not only achieves a provable throughput guarantee (which is close to at least 1/K of the system capacity region), but also leads to explicit upper bounds on the end-to-end delay of every flow. Our end-to-end delay and throughput bounds are in simple and closed forms, and they explicitly quantify the tradeoff between throughput and delay of every flow. Furthermore, the per-flow end-to-end delay bound increases linearly with the number of hops that the flow passes through, which is order-optimal with respect to the number of hops. Unlike traditional solutions based on the back-pressure algorithm, our proposed algorithm combines window-based flow control with a new rate-based distributed scheduling algorithm. A key contribution of our work is to use a novel stochastic dominance approach to bound the corresponding per-flow throughput and delay, which otherwise are often intractable in these types of systems. Our proposed algorithm is fully distributed and requires a low per-node complexity that does not increase with the network size. Hence, it can be easily implemented in practice.

Networking domain

2013

17 ICTCP: Incast Congestion Control for TCP in Data-Center Networks

Transport Control Protocol (TCP) incast congestion happens in high-bandwidth, low-latency networks when multiple synchronized servers send data to the same receiver in parallel. For many important data-center applications, such as MapReduce and Search, this many-to-one traffic pattern is common. Hence, TCP incast congestion may severely degrade their performance, e.g., by increasing response time. In this paper, we study TCP incast in detail by focusing on the relationships between TCP throughput, round-trip time (RTT), and receive window. Unlike previous approaches, which mitigate the impact of TCP incast congestion by using a fine-grained timeout value, our idea is to design an Incast congestion Control for TCP (ICTCP) scheme on the receiver side. In particular, our method adjusts the TCP receive window proactively before packet loss occurs. The implementation and experiments in our testbed demonstrate that we achieve almost zero timeouts and high goodput for TCP incast.
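The receiver-side idea, comparing achieved throughput against the throughput the current receive window would permit and adjusting the window before losses occur, can be sketched as a single update rule. The thresholds and the exact adjustment below are illustrative assumptions, not the ICTCP parameters.

```python
def adjust_rwnd(rwnd, measured_bps, rtt_s, mss=1460,
                increase_thresh=0.9, decrease_thresh=0.5):
    """One ICTCP-flavoured receive-window update (simplified sketch).

    Expected throughput = rwnd / RTT. If the connection actually achieves
    close to that (the window itself is the bottleneck), grow the window by
    one MSS; if it achieves far less (congestion is elsewhere), shrink it;
    otherwise hold. `rwnd` is in bytes, `measured_bps` in bits per second.
    """
    expected_bps = rwnd * 8 / rtt_s
    ratio = measured_bps / expected_bps if expected_bps else 1.0
    if ratio >= increase_thresh:
        return rwnd + mss               # window-limited: allow more in flight
    if ratio <= decrease_thresh:
        return max(2 * mss, rwnd - mss) # congested: back off, keep a floor
    return rwnd
```

Because the adjustment happens at the receiver before any timeout fires, many synchronized senders can be throttled jointly, which is the incast-avoidance effect claimed above.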

Networking domain

2013

18 An Efficient and Robust Addressing Protocol for Node Autoconfiguration in Ad Hoc Networks

Address assignment is a key challenge in ad hoc networks due to the lack of infrastructure. Autonomous addressing protocols require a distributed and self-managed mechanism to avoid address collisions in a dynamic network with fading channels, frequent partitions, and joining/leaving nodes. We propose and analyze a lightweight protocol that configures mobile ad hoc nodes based on a distributed address database stored in filters, which reduces the control load and makes the proposal robust to packet losses and network partitions. We evaluate the performance of our protocol, considering joining nodes, partition merging events, and network initialization. Simulation results show that our protocol resolves all the address collisions and also reduces the control traffic when compared to previously proposed protocols.
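A toy version of filter-based address allocation: if the distributed address database is kept in a Bloom-style filter (an assumption; the paper's filters may differ), a joining node can draw candidate addresses until one is free, then register it. All names below are illustrative.

```python
import hashlib, random

class BloomFilter:
    """Compact set-membership structure with no false negatives (sketch)."""
    def __init__(self, bits=1024, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        # derive `hashes` bit positions from salted SHA-256 digests
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def join_network(addr_filter, pool_size=2 ** 16, rng=random, max_tries=64):
    """A joining node draws random candidate addresses until one is free in
    the shared filter, then registers it (announce + insert)."""
    for _ in range(max_tries):
        cand = rng.randrange(pool_size)
        if cand not in addr_filter:   # rare false positives just cause a redraw
            addr_filter.add(cand)
            return cand
    raise RuntimeError("address pool looks exhausted")
```

Because the filter is small, it can be gossiped cheaply and merged bitwise after partitions heal, which is the control-load reduction the abstract refers to.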

Networking domain

2013

19 NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems

Cloud security is one of the most important issues that have attracted a lot of research and development effort in the past few years. Particularly, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multistep exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, followed finally by DDoS attacks launched through the compromised zombies. Within the cloud system, especially the Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multiphase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph-based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.

Networking domain

2013

20 Revealing Density-Based Clustering Structure from the Core-Connected Tree of a Network

Clustering is an important technique for mining the intrinsic community structures in networks. The density-based network clustering method is able to not only detect communities of arbitrary size and shape, but also identify hubs and outliers. However, it requires manual parameter specification to define clusters, and is sensitive to the parameter of density threshold which is difficult to determine. Furthermore, many real-world networks exhibit a hierarchical structure with communities embedded within other communities. Therefore, the clustering result of a global parameter setting cannot always describe the intrinsic clustering structure accurately. In this paper, we introduce a novel density-based network clustering method, called graph-skeleton-based clustering (gSkeletonClu). By projecting an undirected network to its core-connected maximal spanning tree, the clustering problem can be converted to detect core connectivity components on the tree. The density-based clustering of a specific parameter setting and the hierarchical clustering structure both can be efficiently extracted from the tree. Moreover, it provides a convenient way to automatically select the parameter and to achieve the meaningful cluster tree in a network. Extensive experiments on both real-world and synthetic networks demonstrate the superior performance of gSkeletonClu for effective and efficient density-based clustering.

Networking domain

2013

21 Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm

The problem of efficiently and securely broadcasting to a remote cooperative group occurs in many newly emerging networks. A major challenge in devising such systems is to overcome the obstacles of the potentially limited communication from the group to the sender, the unavailability of a fully trusted key generation center, and the dynamics of the sender. The existing key management paradigms cannot deal with these challenges effectively. In this paper, we circumvent these obstacles and close this gap by proposing a novel key management paradigm. The new paradigm is a hybrid of traditional broadcast encryption and group key agreement. In such a system, each member maintains a single public/secret key pair. Upon seeing the public keys of the members, a remote sender can securely broadcast to any intended subgroup chosen in an ad hoc way. Following this model, we instantiate a scheme that is proven secure in the standard model. Even if all the nonintended members collude, they cannot extract any useful information from the transmitted messages. After the public group encryption key is extracted, both the computation overhead and the communication cost are independent of the group size. Furthermore, our scheme facilitates simple yet efficient member deletion/addition and flexible rekeying strategies. Its strong security against collusion, its constant overhead, and its implementation friendliness without relying on a fully trusted authority render our protocol a very promising solution to many applications.

Networking domain

2013

22 Peer-Assisted Social Media Streaming with Social Reciprocity

Online video sharing and social networking are cross-pollinating rapidly in today's Internet: Online social network users are sharing more and more media contents among each other, while online video sharing sites are leveraging social connections among users to promote their videos. An intriguing development as it is, the operational challenge in previous video sharing systems persists, i.e., the large server cost demanded for scaling of the systems. Peer-to-peer video sharing could be a rescue, only if the video viewers' mutual resource contribution has been fully incentivized and efficiently scheduled. Exploring the unique advantages of a social network based video sharing system, we advocate utilizing social reciprocities among peers with social relationships for efficient contribution incentivization and scheduling, so as to enable high-quality video streaming with low server cost. We exploit social reciprocity with two give-and-take ratios at each peer: (1) the peer contribution ratio (PCR), which evaluates the reciprocity level between a pair of social friends, and (2) the system contribution ratio (SCR), which records the give-and-take level of the user to and from the entire system. We design efficient peer-to-peer mechanisms for video streaming using the two ratios, where each user optimally decides which other users to seek relay help from and to help in relaying video streams, respectively, based on combined evaluations of their social relationship and historical reciprocity levels. Our design achieves effective incentives for resource contribution, load balancing among relay peers, as well as efficient social-aware resource scheduling. We also discuss practical implementation and implement our design in a prototype social media sharing system. Our extensive evaluations based on PlanetLab experiments verify that high-quality large-scale social media sharing can be achieved with conservative server costs.

Networking domain

2013

23 Efficient Storage and Processing of High-Volume Network Monitoring Data

Monitoring modern networks involves storing and transferring huge amounts of data. To cope with this problem, in this paper we propose a technique that transforms measurement data into a representation format meeting two main objectives at the same time. First, it allows a number of operations to be performed directly on the transformed data with a controlled loss of accuracy, thanks to the mathematical framework it is based on. Second, the new representation has a small memory footprint, reducing the space needed for data storage and the time needed for data transfer. To validate our technique, we analyze its performance in terms of accuracy and memory footprint. The results show that the transformed data closely approximate the original data (within 5% relative error) while achieving a compression ratio of 20%; the storage footprint can also be gradually reduced towards that of state-of-the-art compression tools, such as bzip2, if higher approximation is allowed. Finally, a sensitivity analysis shows that the technique allows the accuracy on different input fields to be traded off to accommodate specific application needs, while a scalability analysis indicates that the technique scales with input sizes spanning up to three orders of magnitude.

Networking domain

2013



25 Optimal Content Placement for Peer-to-Peer Video-on-Demand Systems

In this paper, we address the problem of content placement in peer-to-peer (P2P) systems, with the objective of maximizing the utilization of peers' uplink bandwidth resources. We consider system performance under a many-user asymptotic. We distinguish two scenarios, namely “Distributed Server Networks” (DSNs) for which requests are exogenous to the system, and “Pure P2P Networks” (PP2PNs) for which requests emanate from the peers themselves. For both scenarios, we consider a loss network model of performance and determine asymptotically optimal content placement strategies in the case of a limited content catalog. We then turn to an alternative “large catalog” scaling where the catalog size scales with the peer population. Under this scaling, we establish that storage space per peer must necessarily grow unboundedly if bandwidth utilization is to be maximized. Relating the system performance to properties of a specific random graph model, we then identify a content placement strategy and a request acceptance policy that jointly maximize bandwidth utilization, provided storage space per peer grows unboundedly, although arbitrarily slowly, with system size.

Networking domain

2013

26 Throughput-Optimal Scheduling in Multihop Wireless Networks Without Per-Flow Information

In this paper, we consider the problem of link scheduling in multihop wireless networks under general interference constraints. Our goal is to design scheduling schemes that do not use per-flow or per-destination information, maintain a single data queue for each link, and exploit only local information, while guaranteeing throughput optimality. Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information. It is usually difficult to obtain and maintain this type of information, especially in large networks, where there are numerous flows. Also, the back-pressure algorithm maintains a complex data structure at each node, keeps exchanging queue-length information among neighboring nodes, and commonly results in poor delay performance. In this paper, we propose scheduling schemes that can circumvent these drawbacks and guarantee throughput optimality. These schemes use either the readily available hop-count information or only the local information for each link. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal. We also conduct simulations to validate our theoretical results in various settings and show that the proposed schemes can substantially improve the delay performance in most scenarios.

Networking domain

2013

27 Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks

Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing-coding tradeoff.
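For context, the baseline being improved upon here, textbook back-pressure with per-destination queues, can be sketched in a few lines; the paper's contribution is precisely to replace these queues with a probabilistic routing table and shadow-queue-based scheduling. This simplified version (one non-interfering link activated per slot) is illustrative only; all names are assumptions.

```python
def backpressure_step(queues, links, capacity=1):
    """One slot of textbook back-pressure (simplified sketch).

    `queues[node][commodity]` is the per-destination backlog; for a directed
    link (u, v) the pressure of commodity c is Q_u[c] - Q_v[c].  Activate the
    link/commodity pair with the largest positive pressure and move up to
    `capacity` packets across it.
    """
    best = (0, None, None)  # (pressure, link, commodity)
    for u, v in links:
        for c in queues[u]:
            pressure = queues[u][c] - queues[v].get(c, 0)
            if pressure > best[0]:
                best = (pressure, (u, v), c)
    pressure, link, c = best
    if link is None:
        return None          # no positive pressure: stay idle this slot
    u, v = link
    moved = min(capacity, queues[u][c])
    queues[u][c] -= moved
    queues[v][c] = queues[v].get(c, 0) + moved
    return link, c, moved
```

Note that every node must keep one queue per destination and learn its neighbours' backlogs each slot, which is exactly the complexity and delay overhead the abstract criticizes.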

Networking domain

2013

28 LDTS: A Lightweight and Dependable Trust System for Clustered Wireless Sensor Networks

The resource efficiency and dependability of a trust system are the most fundamental requirements for any wireless sensor network (WSN). However, existing trust systems developed for WSNs are incapable of satisfying these requirements because of their high overhead and low dependability. In this work, we propose a lightweight and dependable trust system (LDTS) for WSNs that employ clustering algorithms. First, a lightweight trust decision-making scheme is proposed based on the nodes' identities (roles) in the clustered WSNs, which is suitable for such WSNs because it facilitates energy saving. By canceling feedback between cluster members (CMs) or between cluster heads (CHs), this approach can significantly improve system efficiency while reducing the effect of malicious nodes. More importantly, considering that CHs take on large amounts of data forwarding and communication tasks, a dependability-enhanced trust evaluating approach is defined for cooperation between CHs. This approach can effectively reduce networking consumption while guarding against malicious, selfish, and faulty CHs. Moreover, a self-adaptive weighted method is defined for trust aggregation at the CH level. This approach surpasses the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Theory as well as simulation results show that LDTS demands less memory and communication overhead compared with current typical trust systems for WSNs.
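One plausible reading of "self-adaptive weighted" trust aggregation at the CH level is to shrink the weight given to third-party feedback as it deviates from a node's own direct observation. The rule below is an assumption for illustration, not the LDTS formula; all names are made up.

```python
def aggregate_trust(direct, feedbacks):
    """Self-adaptive weighted trust aggregation (illustrative sketch).

    Instead of a fixed, subjectively chosen weight, the weight given to
    third-party feedback shrinks as that feedback deviates from the node's
    own direct observation.  Trust values are assumed to lie in [0, 1].
    """
    if not feedbacks:
        return direct
    indirect = sum(feedbacks) / len(feedbacks)
    deviation = abs(direct - indirect)   # 0 = full agreement, 1 = total clash
    w_feedback = (1 - deviation) / 2     # at most half, less when it disagrees
    return (1 - w_feedback) * direct + w_feedback * indirect
```

Agreeing feedback is blended in at full weight, while wildly divergent (possibly malicious) feedback is largely discounted without any manual tuning.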

Networking domain

2013
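One reading of the self-adaptive weighting idea above can be sketched as follows: weight each trust report inversely to its deviation from the consensus, so outliers (possibly malicious reporters) contribute less. This is our illustrative interpretation, not the paper's exact formula.

```python
def aggregate_trust(reports, eps=0.05):
    """Weight each trust report inversely to its deviation from the mean,
    then return the weighted average. `eps` avoids division by zero."""
    mean = sum(reports) / len(reports)
    weights = [1.0 / (abs(r - mean) + eps) for r in reports]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, reports)) / total

# Two consistent reports (0.9, 0.8) and one outlier (0.1): the outlier is
# down-weighted, so the aggregate stays closer to the honest consensus
# than a plain average would.
reports = [0.9, 0.8, 0.1]
agg = aggregate_trust(reports)
```

Compared with subjectively fixed weights, the weights here adapt to the data itself, which is the property the abstract highlights.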

Page 12: IEEE Projects 2013, Mtech projects 2013,Cloud Computing 2013,Final year engineering projects ieee based

29 HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing

Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.

Networking domain

2013
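The access decision HASBE enforces, including the multiple value assignments for access expiration time, can be illustrated with plain checks. Note the hedge: the real scheme enforces this cryptographically with attribute-set-based encryption; the function and attribute names below are our own toy stand-ins.

```python
def can_access(user_attrs, policy_attrs, expirations, now):
    """Grant access if the user holds every attribute the policy requires and
    any one of the multiple assigned expiration times is still in the future."""
    return policy_attrs <= user_attrs and any(now < t for t in expirations)

user = {"doctor", "cardiology"}
policy = {"doctor"}
# Multiple expiration values assigned to the same user, as in HASBE:
exp_times = [100, 200]

assert can_access(user, policy, exp_times, now=150)      # 150 < 200: still valid
assert not can_access(user, policy, exp_times, now=250)  # all assignments expired
```

The point of multiple assignments is visible here: extending a user's access only requires adding a later expiration value rather than reissuing everything.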

30 CAM: Cloud-Assisted Privacy Preserving Mobile Health Monitoring

Cloud-assisted mobile health (mHealth) monitoring, which applies the prevailing mobile communications and cloud computing technologies to provide feedback decision support, has been considered a revolutionary approach to improving the quality of healthcare service while lowering the healthcare cost. Unfortunately, it also poses a serious risk to both clients' privacy and the intellectual property of monitoring service providers, which could deter the wide adoption of mHealth technology. This paper addresses this important problem by designing a cloud-assisted privacy-preserving mobile health monitoring system to protect the privacy of the involved parties and their data. Moreover, the outsourcing decryption technique and a newly proposed key private proxy re-encryption are adapted to shift the computational complexity of the involved parties to the cloud without compromising clients' privacy and service providers' intellectual property. Finally, our security and performance analysis demonstrates the effectiveness of our proposed design.

Networking domain

2013

31 Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler

Research into developing effective computer-aided techniques for planning software projects is important and challenging for software engineering. Unlike projects in other fields, software projects are people-intensive activities, and their related resources are mainly human resources. Thus, an adequate model for software project planning has to deal not only with the problem of project task scheduling but also with the problem of human resource allocation. Because both of these problems are difficult, existing models either suffer from a very large search space or have to restrict the flexibility of human resource allocation to simplify the model. To develop a flexible and effective model for software project planning, this paper develops a novel approach with an event-based scheduler (EBS) and an ant colony optimization (ACO) algorithm. The proposed approach represents a plan by a task list and a planned employee allocation matrix. In this way, both the issues of task scheduling and employee allocation can be taken into account. In the EBS, the beginning time of the project, the times when resources are released from finished tasks, and the times when employees join or leave the project are regarded as events. The basic idea of the EBS is to adjust the allocation of employees at events and keep the allocation unchanged at non-events. With this strategy, the proposed method enables the modeling of resource conflict and task preemption and preserves the flexibility in human resource allocation. To solve the planning problem, an ACO algorithm is further designed. Experimental results on 83 instances demonstrate that the proposed method is very promising.

Networking domain

2013
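The core rule of the event-based scheduler, adjusting the employee allocation only at events and keeping it fixed in between, can be sketched with a toy model. Everything below (the greedy round-robin assignment, the data shapes, the names) is our own simplification, not the paper's EBS.

```python
def schedule(events, availability, tasks):
    """At each event time, recompute the allocation; between events it is fixed.
    `availability[t]` lists employees free at event time t; `tasks` are task names."""
    plan = {}
    for t in sorted(events):
        free = availability.get(t, [])
        # Toy rule: spread free employees round-robin over the tasks.
        plan[t] = {emp: tasks[i % len(tasks)] for i, emp in enumerate(free)}
    return plan

events = [0, 5, 9]  # project start, a task finishes, a new employee joins
availability = {0: ["ann", "bob"], 5: ["ann"], 9: ["ann", "bob", "eve"]}
plan = schedule(events, availability, ["t1", "t2"])
# The allocation is only (re)defined at the three event times; the ACO search
# in the paper would optimize the task list feeding such a scheduler.
```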


32 A Multiagent Modeling and Investigation of Smart Homes With Power Generation, Storage, and Trading Features (Grid Computing)

Smart homes, as active participants in a smart grid, may no longer be modeled by passive load curves, because their interactive communication and bidirectional power flow within the smart grid affect demand, generation, and electricity rates. To consider such dynamic environmental properties, we use a multiagent-system-based approach in which individual homes are autonomous agents making rational decisions to buy, sell, or store electricity based on their present and expected future amounts of load, generation, and storage, accounting for the benefits each decision can offer. In the proposed scheme, home agents prioritize their decisions based on the expected utilities they provide. Smart homes' intention to minimize their electricity bills is in line with the grid's aim to flatten the total demand curve. With a set of case studies and sensitivity analyses, we show how the overall performance of the home agents converges, as an emergent behavior, to an equilibrium that benefits both entities under different operational conditions, and we determine the situations in which conventional homes would benefit from purchasing their own local generation-storage systems.

Networking domain

2013
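The buy/sell/store decision by expected utility described above can be sketched as a simple rule. This is our own toy decision rule under stated assumptions (surplus is sold when the price is expected to fall, stored otherwise, and a deficit is bought); the paper's agents use a richer utility model.

```python
def decide(price, forecast_price, load, generation, storage, capacity):
    """Toy home-agent rule: pick the action with the highest expected benefit."""
    surplus = generation - load
    if surplus <= 0:
        return "buy"                     # deficit: must purchase from the grid
    if storage < capacity and forecast_price > price:
        return "store"                   # price expected to rise: hold the energy
    return "sell"                        # price expected to fall (or storage full)

assert decide(price=10, forecast_price=15, load=2, generation=5, storage=1, capacity=4) == "store"
assert decide(price=10, forecast_price=8, load=2, generation=5, storage=1, capacity=4) == "sell"
assert decide(price=10, forecast_price=8, load=6, generation=5, storage=1, capacity=4) == "buy"
```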

33   AMPLE: An Adaptive Traffic Engineering System Based on Virtual Routing Topologies

Handling traffic dynamics in order to avoid network congestion and subsequent service disruptions is one of the key tasks performed by contemporary network management systems. Given the simple but rigid routing and forwarding functionalities in IP-based environments, efficient resource management and control solutions against dynamic traffic conditions are still yet to be obtained. In this article, we introduce AMPLE, an efficient traffic engineering and management system that performs adaptive traffic control by using multiple virtualized routing topologies. The proposed system consists of two complementary components: offline link weight optimization, which takes as input the physical network topology and tries to produce maximum routing path diversity across multiple virtual routing topologies for long-term operation through the optimized setting of link weights; and adaptive traffic control, which performs intelligent traffic splitting across individual routing topologies in reaction to the monitored network dynamics at short timescale. According to our evaluation with real network topologies and traffic traces, the proposed system is able to cope almost optimally with unpredicted traffic dynamics and, as such, it constitutes a new proposal for achieving better quality of service and overall network performance in IP networks.

NETWORKING

2012
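The adaptive traffic-splitting step can be illustrated with a minimal rule: split a demand across the virtual routing topologies in inverse proportion to each topology's current path load. This splitting rule is our own stand-in; AMPLE's actual controller is more sophisticated.

```python
def split_traffic(demand, path_loads):
    """Split a traffic demand across virtual routing topologies in inverse
    proportion to each topology's current path load (illustrative rule only)."""
    inv = [1.0 / max(load, 1e-9) for load in path_loads]
    total = sum(inv)
    return [demand * w / total for w in inv]

# Three virtual topologies; the least loaded path receives the largest share.
shares = split_traffic(demand=90.0, path_loads=[1.0, 2.0, 3.0])
```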

34 Computing Localized Power-Efficient Data Aggregation Trees for Sensor Networks

We propose localized, self-organizing, robust, and energy-efficient data aggregation tree approaches for sensor networks, which we call Localized Power-Efficient Data Aggregation Protocols (L-PEDAPs). They are based on topologies, such as LMST and RNG, that can approximate the minimum spanning tree and can be computed efficiently using only position or distance information of one-hop neighbors. The actual routing tree is constructed over these topologies. We also consider different parent selection strategies while constructing a routing tree. We compare each topology and parent selection strategy and conclude that the best among them is the shortest-path strategy over the LMST structure. Our solution also involves route maintenance procedures that are executed when a sensor node fails or a new node is added to the network. The proposed solution is also adapted to consider the remaining power levels of nodes in order to increase the network lifetime. Our simulation results show that our power-aware localized approach achieves almost the same network lifetime as a centralized solution, and close to 90 percent of an upper bound derived here.

NETWORKING

2012
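The shortest-path parent-selection strategy over a given topology can be sketched with standard Dijkstra parent pointers. The graph below is a hypothetical three-node topology of our own; in L-PEDAP the adjacency would come from the LMST/RNG structure.

```python
import heapq

def shortest_path_parents(adj, root):
    """Dijkstra from `root`, returning each node's parent in the routing tree.
    `adj` maps node -> {neighbor: link_cost}."""
    dist, parent = {root: 0.0}, {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent

adj = {
    "sink": {"a": 1.0, "b": 4.0},
    "a": {"sink": 1.0, "b": 1.0},
    "b": {"sink": 4.0, "a": 1.0},
}
parents = shortest_path_parents(adj, "sink")
# b reaches the sink more cheaply via a (cost 2) than directly (cost 4),
# so the routing tree selects a as b's parent.
```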

35 Improving Energy Saving and Reliability in Wireless Sensor Networks Using a Simple CRT-Based Packet-Forwarding Solution

This paper deals with a novel forwarding scheme for wireless sensor networks aimed at combining low computational complexity and high performance in terms of energy efficiency and reliability. The proposed approach relies on a packet-splitting algorithm based on the Chinese Remainder Theorem (CRT) and is characterized by a simple modular division between integers. An analytical model for estimating the energy efficiency of the scheme is presented, and several practical issues such as the effect of unreliable channels, topology changes, and MAC overhead are discussed. The results obtained show that the proposed algorithm outperforms traditional approaches in terms of power saving, simplicity, and fair distribution of energy consumption among all nodes in the network.

NETWORKING

2012
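The CRT-based packet splitting named in this abstract, reducing each forwarded part to "a simple modular division between integers", can be shown concretely. The moduli, packet value, and function names are our own illustration; the reconstruction is the standard Chinese Remainder Theorem.

```python
from math import prod

def crt_split(packet, moduli):
    """Split a packet (as an integer) into small residues, one per forwarding path."""
    return [packet % m for m in moduli]

def crt_reconstruct(residues, moduli):
    """Recombine residues with the Chinese Remainder Theorem (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse of Mi mod m
    return x % M

moduli = [5, 7, 9]          # pairwise coprime; product 315
packet = 200                # must be smaller than the product of the moduli
parts = crt_split(packet, moduli)
assert crt_reconstruct(parts, moduli) == packet
```

Each residue is much smaller than the original packet, which is where the transmission-energy saving comes from; any node holding all residues can rebuild the packet.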

36 Continuous Neighbor Discovery in Asynchronous Sensor Networks

NETWORKING

2012

37 Energy Efficient Routing Mechanism in Wireless Sensor Network

This paper gives a brief idea about wireless sensor networks and energy-efficient routing in wireless sensor networks. Sensor networks are deployed in an ad hoc fashion, with individual nodes remaining largely inactive for long periods of time, but then becoming suddenly active when something is detected. Sensor networks are generally battery constrained. They are prone to failure, and therefore the sensor network topology changes frequently. In this paper, we propose a routing algorithm for wireless sensor networks combining energy-efficient and hierarchical routing techniques, which minimizes energy consumption, increases the lifetime of the sensor nodes, and saves battery power.

NETWORKING

2012

38    Adaptive Opportunistic Routing for Wireless Ad Hoc Networks

A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc networks is proposed. The proposed scheme utilizes a reinforcement learning framework to opportunistically route the packets even in the absence of reliable knowledge about channel statistics and network model. This scheme is shown to be optimal with respect to an expected average per-packet reward criterion. The proposed routing scheme jointly addresses the issues of learning and routing in an opportunistic context, where the network structure is characterized by the transmission success probabilities. In particular, this learning framework leads to a stochastic routing scheme that optimally “explores” and “exploits” the opportunities in the network.

NETWORKING

2012
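The explore/exploit idea in the abstract above can be sketched with a simple per-neighbor running-average learner using an epsilon-greedy policy. This is a toy stand-in for the paper's reinforcement-learning scheme; the neighbor names and the 90%/30% channel success rates are assumptions for illustration.

```python
import random

class OpportunisticRouter:
    """Keep a running average per-packet reward for each neighbor and pick the
    best-known neighbor, exploring occasionally (epsilon-greedy)."""

    def __init__(self, neighbors, epsilon=0.1, rng=None):
        self.estimates = {n: 0.0 for n in neighbors}
        self.counts = {n: 0 for n in neighbors}
        self.epsilon = epsilon
        self.rng = rng or random.Random()

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.estimates))      # explore
        return max(self.estimates, key=self.estimates.get)    # exploit

    def update(self, neighbor, reward):
        self.counts[neighbor] += 1
        n = self.counts[neighbor]
        self.estimates[neighbor] += (reward - self.estimates[neighbor]) / n

router = OpportunisticRouter(["a", "b"], rng=random.Random(7))
for _ in range(200):
    n = router.choose()
    # Hypothetical channel: forwarding via "a" succeeds 90% of the time, "b" 30%.
    reward = 1.0 if router.rng.random() < (0.9 if n == "a" else 0.3) else 0.0
    router.update(n, reward)
# After 200 packets the router has learned to prefer neighbor "a"
# without any prior knowledge of the channel statistics.
```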

39 The CoQUOS Approach to Continuous Queries in Unstructured Overlays

The current peer-to-peer (P2P) content distribution systems are constricted by their simple on-demand content discovery mechanism. The utility of these systems can be greatly enhanced by incorporating two capabilities, namely a mechanism through which peers can register their long-term interests with the network so that they can be continuously notified of new data items, and a means for the peers to advertise their contents. Although researchers have proposed a few unstructured overlay-based publish-subscribe systems that provide the above capabilities, most of these systems require intricate indexing and routing schemes, which not only make them highly complex but also render the overlay network less flexible towards transient peers. This paper argues that for many P2P applications, implementing full-fledged publish-subscribe systems is an overkill. For these applications, we study the alternate continuous query paradigm, which is a best-effort service providing the above two capabilities. We present a scalable and effective middleware called CoQUOS for supporting continuous queries in unstructured overlay networks. Besides being independent of the overlay topology, CoQUOS preserves the simplicity and flexibility of the unstructured P2P network. Our design of the CoQUOS system is characterized by two novel techniques, namely a cluster-resilient random walk algorithm for propagating the queries to various regions of the network and a dynamic probability-based query registration scheme to ensure that the registrations are well distributed in the overlay. Further, we also develop effective and efficient schemes for providing resilience to the churn of the P2P network and for ensuring a fair distribution of the notification load among the peers. This paper studies the properties of our algorithms through theoretical analysis. We also report a series of experiments evaluating the effectiveness and the costs of the proposed schemes.

NETWORKING

2012

40 Secure Data Transmission In Wireless Broadcast Services With Efficient Key Management

Wireless broadcast is an effective approach for disseminating data to a number of users. To provide secure access to data in wireless broadcast services, symmetric-key-based encryption is used to ensure that only users who own the valid keys can decrypt the data. With regard to various subscriptions, an efficient key management for distributing and changing keys is in great demand for access control in broadcast services. In this paper, we propose an efficient key management scheme, namely, key tree reuse (KTR), to handle key distribution with regard to complex subscription options and user activities. Key Tree Reuse has the following advantages. First, it supports all subscription activities in wireless broadcast services. Second, in KTR, a user only needs to hold one set of keys for all subscribed programs instead of separate sets of keys for each program. Third, KTR identifies the minimum set of keys that must be changed to ensure broadcast security and minimize the rekey cost. Our simulations show that KTR can save about 45 percent of communication overhead in the broadcast channel and about 50 percent of decryption cost for each user compared with logical-key-hierarchy-based approaches.

NETWORKING

2012
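KTR builds on the classic logical-key-hierarchy idea, in which the minimum set of keys to change when a user is revoked is exactly the path from that user's leaf to the root of the key tree. The heap-indexed sketch below illustrates only that path computation, not KTR's reuse of trees across programs.

```python
def keys_to_change(leaf_index, num_leaves):
    """In a logical key tree stored as a binary heap (root = 1, leaves occupy
    indices num_leaves .. 2*num_leaves - 1), the keys that must change when a
    user leaves are the keys on the path from that user's leaf to the root."""
    node = num_leaves + leaf_index   # heap index of the user's leaf
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

# 8 users: revoking one user requires changing log2(8) + 1 = 4 keys,
# not all 15 keys in the tree.
path = keys_to_change(leaf_index=3, num_leaves=8)
```

This logarithmic rekey cost is the baseline that KTR's reuse strategy then improves on when users subscribe to several programs at once.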

41 Design and Implementation of TARF: A Trust-Aware Routing Framework for WSN

The multi-hop routing in wireless sensor networks (WSNs) offers little protection against identity deception through replaying routing information. An adversary can exploit this defect to launch various harmful or even devastating attacks against the routing protocols, including sinkhole attacks, wormhole attacks and Sybil attacks. The situation is further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques or efforts at developing trust-aware routing protocols do not effectively address this severe problem. To secure the WSNs against adversaries misdirecting the multi-hop routing, we have designed and implemented TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight time synchronization or known geographic information, TARF provides trustworthy and energy-efficient routes. Most importantly, TARF proves effective against those harmful attacks developed out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios including mobile and RF-shielding network conditions. Further, we have implemented a low-overhead TARF module in TinyOS; as demonstrated, this implementation can be incorporated into existing routing protocols with the least effort. Based on TARF, we also demonstrated a proof-of-concept mobile target detection application that functions well against an anti-detection mechanism.

NETWORK + SECURITY

2012

42    Design and Implementation of TARF: A Trust-Aware Routing Framework for WSN

The multi-hop routing in wireless sensor networks (WSNs) offers little protection against identity deception through replaying routing information. An adversary can exploit this defect to launch various harmful or even devastating attacks against the routing protocols, including sinkhole attacks, wormhole attacks and Sybil attacks. The situation is further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques or efforts at developing trust-aware routing protocols do not effectively address this severe problem. To secure the WSNs against adversaries misdirecting the multi-hop routing, we have designed and implemented TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight time synchronization or known geographic information, TARF provides trustworthy and energy-efficient routes. Most importantly, TARF proves effective against those harmful attacks developed out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios including mobile and RF-shielding network conditions. Further, we have implemented a low-overhead TARF module in TinyOS; as demonstrated, this implementation can be incorporated into existing routing protocols with the least effort. Based on TARF, we also demonstrated a proof-of-concept mobile target detection application that functions well against an anti-detection mechanism.

NETWORK + SECURITY

2012

43   Secure Authentication Scheme in Wireless Networks

Mobile users enjoy seamless roaming over wireless environments. The wireless network is cumbersome and error prone, so there is a need for a strong authentication scheme designed in such a way that it retains the privacy of the user. It should also be capable of minimizing communication overhead, as most of the message exchanges in a wireless network are found to be messages meant for authentication, which results in a clumsy environment. The proposed scheme brings out a solution for the above-mentioned problems, where the authentication procedure consists of only four messages exchanged between the home agent, mobile user, and foreign agent. A lightweight authentication scheme with user anonymity is also presented. Apart from that, the other main issues to be solved are prevention of fraud, periodic updating of the session key, elimination of the password verification table, and single registration of the user to the home network. The scheme protects the password even if other information is disclosed, guards the wireless network against various attacks, and is simple and user friendly.

NETWORK + SECURITY

2012

44 Privacy-Preserving Decentralized Key-Policy Attribute-Based Encryption

Decentralized attribute-based encryption (ABE) is a variant of a multi-authority ABE scheme where each authority can issue secret keys to the user independently, without any cooperation and without a central authority. This is in contrast to previous constructions, where multiple authorities must be online and set up the system interactively, which is impractical. Hence, it is clear that a decentralized ABE scheme eliminates the heavy communication cost and the need for collaborative computation in the setup stage. Furthermore, every authority can join or leave the system freely without the necessity of re-initializing the system. In contemporary multi-authority ABE schemes, a user's secret keys from different authorities must be tied to his global identifier (GID) to resist collusion attacks. However, this compromises the user's privacy: multiple authorities can collaborate to trace the user by his GID, collect his attributes, and then impersonate him. Therefore, constructing a privacy-preserving decentralized ABE scheme remains a challenging research problem. In this paper, we propose a privacy-preserving decentralized key-policy ABE scheme where each authority can issue secret keys to a user independently without knowing anything about his GID. Therefore, even if multiple authorities are corrupted, they cannot collect the user's attributes by tracing his GID. Notably, our scheme only requires standard complexity assumptions (e.g., decisional bilinear Diffie-Hellman) and does not require any cooperation between the multiple authorities, in contrast to the previous comparable scheme, which requires non-standard complexity assumptions (e.g., q-decisional Diffie-Hellman inversion) and interactions among multiple authorities. To the best of our knowledge, it is the first privacy-preserving decentralized ABE scheme based on standard complexity assumptions.

NETWORK + SECURITY

2012


45 A Three-Party Authentication for Key Distribution Protocol Using Classical and Quantum Cryptography

Existing third-party authentication schemes for message transmission offer weak security against attacks such as man-in-the-middle and suffer in efficiency. In this approach, we present a Quantum Key Distribution Protocol (QKDP) to safeguard security in larger networks, which combines the merits of classical cryptography and quantum cryptography. We propose two three-party QKDPs, one implemented with implicit user authentication and the other with explicit mutual authentication, which offer the following: 1. Security against attacks such as man-in-the-middle, eavesdropping, and replay. 2. Improved efficiency, as the proposed protocols contain the fewest number of communication rounds among existing QKDPs. 3. The ability for two parties to share and use a long-term secret repeatedly. To prove the security of the proposed schemes, this work also presents a new primitive called the Unbiased-Chosen Basis (UCB) assumption.

NETWORK + SECURITY

2012

46 Nymble: Blocking Misbehaving Users in Anonymizing Networks

Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client’s IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular Web sites. Web site administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can “blacklist” misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers’ definitions of misbehavior—servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.

NETWORK + SECURITY

2012
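The property Nymble provides, a server-specific, time-limited pseudonym that can be blacklisted without revealing the user, can be illustrated with a toy hash-based token. This stand-in is only an illustration of the linkability boundaries; the real Nymble construction involves a pseudonym manager, a nymble manager, and trapdoor chains.

```python
import hashlib

def nymble_token(user_seed, server, time_window):
    """Toy stand-in for a nymble: a pseudonym bound to one server and one time
    window, so a server can blacklist the token without learning the user's IP."""
    data = f"{user_seed}|{server}|{time_window}".encode()
    return hashlib.sha256(data).hexdigest()

t1 = nymble_token("alice-seed", "wiki.example", 42)
t2 = nymble_token("alice-seed", "wiki.example", 42)
t3 = nymble_token("alice-seed", "forum.example", 42)

blacklist = {t1}               # the server blacklists a misbehaving token
assert t2 in blacklist         # same user, same server, same window: blocked
assert t3 not in blacklist     # tokens are unlinkable across servers
```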

47 FireCol: A Collaborative Protection Network for the Detection of Flooding DDoS Attacks

Distributed denial-of-service (DDoS) attacks remain a major security problem whose mitigation is very hard, especially when it comes to highly distributed botnet-based attacks. The early discovery of these attacks, although challenging, is necessary to protect end-users as well as the expensive network infrastructure resources. In this paper, we address the problem of DDoS attacks and present the theoretical foundation, architecture, and algorithms of FireCol. The core of FireCol is composed of intrusion prevention systems (IPSs) located at the Internet service provider (ISP) level. The IPSs form virtual protection rings around the hosts to defend, and collaborate by exchanging selected traffic information. An evaluation of FireCol using extensive simulations and a real dataset is presented, showing FireCol's effectiveness and low overhead, as well as its support for incremental deployment in real networks.

NETWORK + SECURITY

2012

48 A Secure Intrusion Detection System Against DDoS Attacks in Wireless Mobile Ad-hoc Networks

Wireless mobile ad-hoc network (MANET) is an emerging technology with great potential to be applied in critical situations like battlefields and in commercial applications such as building and traffic surveillance. A MANET is infrastructure-less: no centralized controller exists, and each node has routing capability. Each device in a MANET is free to move independently in any direction, and will therefore change its connections to other devices frequently. So one of the major challenges wireless mobile ad-hoc networks face today is security, because no central controller exists. MANETs are a kind of wireless ad hoc network that usually has a routable networking environment on top of a link-layer ad hoc network. Ad hoc networks also encompass wireless sensor networks, so the problems faced by sensor networks are also faced by MANETs. Deploying sensor nodes in an unattended environment increases the chances of various attacks. There are many security attacks in MANET, and DDoS (distributed denial of service) is one of them. Our main aim is to see the effect of DDoS on routing load, packet drop rate, and end-to-end delay, all of which rise under an attack on the network. With these parameters, among others, we build a secure IDS to detect this kind of attack and block it. In this paper we discuss some attacks on MANET, including DDoS, and provide security against the DDoS attack.

NETWORK + SECURITY

2012


49     Automatic Discovery of Association Orders between Name and Aliases from the Web using Anchor Texts-based Co-occurrences

Many celebrities and experts from various fields may be referred to not only by their personal names but also by their aliases on the Web. Aliases are very important in information retrieval for retrieving complete information about a personal name, as some of the person's Web pages may be referred to by his aliases. The aliases for a personal name are extracted by a previously proposed alias extraction method. In information retrieval, the Web search engine automatically expands a search query on a person's name by tagging his aliases for complete information retrieval, thereby improving recall in the relation detection task and achieving a significant mean reciprocal rank (MRR) for the search engine. For further substantial improvement in recall and MRR over the previously proposed methods, our method orders the aliases based on their associations with the name, using anchor texts-based co-occurrences between name and aliases, in order to help the search engine tag the aliases according to the order of association. The association orders are discovered automatically by creating an anchor texts-based co-occurrence graph between name and aliases. A ranking support vector machine (SVM) is used to create connections between name and aliases in the graph by ranking anchor texts-based co-occurrence measures. The hop distances between nodes in the graph, found by mining the graph, yield the associations between name and aliases. The proposed method outperforms previously proposed methods, achieving substantial growth in recall and MRR.

NETWORK + SECURITY

2012
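The hop-distance step above is a plain breadth-first search over the co-occurrence graph: the fewer hops an alias sits from the name node, the stronger its association. The three-node graph below is our own hypothetical example.

```python
from collections import deque

def hop_distances(graph, start):
    """Breadth-first search over the anchor-text co-occurrence graph; the hop
    distance from the name node orders the aliases by association strength."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Hypothetical graph: the name links directly to one alias, indirectly to another.
graph = {"name": ["alias1"], "alias1": ["alias2"], "alias2": []}
d = hop_distances(graph, "name")
# alias1 (1 hop) is ranked as more strongly associated than alias2 (2 hops).
```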

50 A Group Key Agreement Protocol Based on Stability and Power Using Elliptic Curve Cryptography

In mobile ad hoc networks, security is the main constraint in message transmission. For secure group-based message transmission, a key must be shared among the users. This paper addresses an interesting security problem in mobile ad hoc networks: dynamic group key agreement for key establishment. For secure communication, a group key must be shared by all group members, and this group key should be updated when existing group members leave the network or new members enter it. In this paper, we propose an efficient group key agreement protocol called Key Agreement protocol based on Stability and Power (KASP). The idea is to split a large group into several subgroups, each maintaining its subgroup keys, and to manage the subgroups using the Elliptic Curve Diffie-Hellman (ECDH) key agreement algorithm. In KASP, we develop two protocols, namely Subgroup Key Generation (SKG) and Group Key Generation (GKG), based on ECDH for subgroups and outer groups, respectively. These subgroup keys and group keys are changed when there are membership changes (such as when a current member leaves or a new member joins). By introducing this group-based approach, messages and key updates are limited to the subgroup and outer group, so the computation load is distributed over many mobile ad hoc nodes. Both theoretical and practical results show that KASP performs well for the key establishment problem in ad hoc networks in terms of efficiency and security.

NETWORK + SECURITY

2012
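The Diffie-Hellman exchange underlying SKG/GKG can be sketched as follows. Note the substitution: for a self-contained example we use classic Diffie-Hellman over a prime field in place of the elliptic-curve version the paper uses; the key-agreement structure is the same, only the group differs.

```python
import random

P = 2**127 - 1   # a Mersenne prime, standing in for an elliptic-curve group
G = 3            # illustrative generator

def keypair(rng):
    priv = rng.randrange(2, P - 1)
    return priv, pow(G, priv, P)

rng = random.Random(1)
a_priv, a_pub = keypair(rng)   # subgroup member A
b_priv, b_pub = keypair(rng)   # subgroup member B (e.g., the subgroup head)

# Each side combines its own private key with the other's public key:
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b    # both derive the same subgroup key material

# On a membership change, fresh private values are drawn and the key is renewed,
# which is the rekeying step KASP confines to the affected subgroup.
```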

51    An Abuse-Free Fair Contract-Signing Protocol Based on the RSA Signature

A fair contract-signing protocol allows two potentially mistrusting parties to exchange their commitments (i.e., digital signatures) to an agreed contract over the Internet in a fair way, so that either each of them obtains the other's signature, or neither party does. Based on the RSA signature scheme, a new digital contract-signing protocol is proposed in this paper. Like the existing RSA-based solutions for the same problem, our protocol is not only fair but also optimistic, since the trusted third party is involved only in situations where one party is cheating or the communication channel is interrupted. Furthermore, the proposed protocol satisfies a new property, abuse-freeness: if the protocol is executed unsuccessfully, neither of the two parties can show the validity of intermediate results to others. Technical details are provided to analyze the security and performance of the proposed protocol. In summary, we present the first abuse-free fair contract-signing protocol based on the RSA signature, and show that it is both secure and efficient.

NETWORK + SECURITY

2012
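The hash-then-sign RSA primitive that such a protocol builds on can be sketched as follows; the toy parameters and the reduction of the digest mod n are illustrative only, and a real fair-exchange protocol would use full-size keys and proper padding.

```python
import hashlib

# Toy RSA parameters (demo only; real contract signing needs >= 2048-bit keys)
p, q = 10**9 + 7, 10**9 + 9           # two primes
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                   # private exponent (Python 3.8+ modular inverse)

def sign(msg: bytes) -> int:
    """Hash-then-sign: reduce the SHA-256 digest mod n, then apply d."""
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

s = sign(b"contract v1")
assert verify(b"contract v1", s)
assert not verify(b"contract v2", s)
```

In the fair-exchange setting, each party releases partial information about such a signature step by step, with the trusted third party able to complete the exchange only if one side aborts.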

52 Providing Witness Anonymity Under Peer-to-Peer Settings

In this paper, we introduce the concept of witness anonymity for peer-to-peer systems, as well as other systems with a peer-to-peer nature. Witness anonymity combines the seemingly conflicting requirements of anonymity (for honest peers who report on the misbehavior of other peers) and accountability (for malicious peers that attempt to misuse the anonymity feature to slander honest peers). We propose the Secure Deep Throat (SDT) protocol to provide anonymity for the witnesses of malicious or selfish behavior, enabling such peers to report on this behavior without fear of retaliation. On the other hand, in SDT, the misuse of anonymity is restrained in such a way that any malicious peer attempting to send multiple claims against the same innocent peer for the same reason (i.e., the same misbehavior type) can be identified. We also describe how SDT can be used in two modes. The active mode can be used in scenarios with real-time requirements, e.g., detecting and preventing the propagation of peer-to-peer worms, whereas the passive mode is suitable for scenarios without strict real-time requirements, e.g., query-based reputation systems. We analyze the security and overhead of SDT, and present countermeasures that can be used to mitigate various attacks on the protocol. Moreover, we show how SDT can be easily integrated with existing protocols/mechanisms with a few examples. Our analysis shows that the communication, storage, and computation overheads of SDT are acceptable in peer-to-peer systems.

NETWORK + SECURITY

2012

Page 18: IEEE Projects 2013, Mtech projects 2013,Cloud Computing 2013,Final year engineering projects ieee based

53 Authenticated Group Key Transfer Protocol Based on Secret Sharing

Key transfer protocols rely on a mutually trusted key generation center (KGC) to select session keys and transport them secretly to all communicating entities. Most often, the KGC encrypts session keys under another secret key shared with each entity during registration. In this paper, we propose an authenticated key transfer protocol based on a secret sharing scheme, in which the KGC can broadcast group key information to all group members at once such that only authorized group members can recover the group key, while unauthorized users cannot. The confidentiality of this transformation is information-theoretically secure. We also provide authentication for transporting this group key. The goals of, and security threats to, our proposed group key transfer protocol are analyzed in detail.

NETWORK + SECURITY

2012
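The broadcast idea rests on (k, n) secret sharing: the KGC publishes values from which any authorized coalition can recover the key. A minimal Shamir-style sketch (the field size and parameters are illustrative, not the paper's construction):

```python
import random

PRIME = 2**127 - 1   # Mersenne prime field, demo only

def split(secret, k, n):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret at n points."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(12345, k=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares suffice
```

In the protocol, the per-member registration secrets play the role of shares, so one broadcast lets every authorized member, and only them, derive the group key.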

54 MABS Multicast Authentication Based on Batch Signature

Conventional block-based multicast authentication schemes overlook the heterogeneity of receivers by letting the sender choose the block size, divide a multicast stream into blocks, associate each block with a signature, and spread the effect of the signature across all the packets in the block through hash graphs or coding algorithms. The correlation among packets makes them vulnerable to packet loss, which is inherent in the Internet and wireless networks. Moreover, the lack of denial-of-service (DoS) resilience renders most of them vulnerable to packet injection in hostile environments.

NETWORK + SECURITY

2012

DISTRIBUTED AND PARALLEL

55 Distributed Private Key Generation for Identity-Based Cryptosystems in Ad Hoc Networks

Identity-Based Cryptography (IBC) has the advantage that no public key certification is needed when used in a mobile ad hoc network (MANET). This is especially useful when bi-directional channels do not exist in a MANET. However, IBC normally needs a centralized server for issuing private keys for different identities. We give a protocol distributing this task among all users, thus eliminating the need for a centralized server in IBC for use in MANETs. Distributing the public key certification task among users has been considered before. Through the application of Feldman's verifiable secret sharing scheme, a construction for sharing the task of the IBC private key generator (PKG) among all users is given. More specifically, the main contribution of this article is a distributed PKG implementation for Boneh-Franklin's IBE, which allows the function of a trusted private key generator (needed for IBC) to be securely distributed among all the participating nodes in a MANET.

DISTRIBUTED AND PARALLEL

2012

56 DRINA: A Lightweight and Reliable Routing Approach for in-Network Aggregation in Wireless Sensor Networks

Large-scale, dense wireless sensor networks (WSNs) will be increasingly deployed in different classes of applications for accurate monitoring. Due to the high density of nodes in these networks, it is likely that redundant data will be detected by nearby nodes when sensing an event. Since energy conservation is a key issue in WSNs, data fusion and aggregation should be exploited in order to save energy. In this case, redundant data can be aggregated at intermediate nodes, reducing the size and number of exchanged messages and, thus, decreasing communication costs and energy consumption. In this work we propose a novel Data Routing for In-Network Aggregation scheme, called DRINA, that has some key aspects such as a reduced number of messages for setting up a routing tree, a maximized number of overlapping routes, a high aggregation rate, and reliable data aggregation and transmission. The proposed DRINA algorithm was extensively compared to two other well-known solutions: the InFRA and SPT algorithms. Our results indicate clearly that the routing tree built by DRINA provides the best aggregation quality among these algorithms, and that our proposed solution outperforms them in different scenarios and in the key aspects required by WSNs.

DISTRIBUTED AND PARALLEL

2012
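The energy argument, aggregate at intermediate nodes so that each link carries a single message, can be illustrated with a sum aggregate over a routing tree. The tree encoding and the `aggregate` helper are our own illustration, not DRINA's actual algorithm.

```python
from collections import defaultdict

def aggregate(tree, readings, root):
    """tree maps child -> parent; returns (value at sink, messages sent).

    Each node fuses its children's partial sums with its own reading and
    forwards exactly one message upward, instead of relaying raw readings.
    """
    children = defaultdict(list)
    for child, parent in tree.items():
        children[parent].append(child)

    msgs = 0
    def collect(node):
        nonlocal msgs
        total = readings[node]
        for c in children[node]:
            total += collect(c)
            msgs += 1          # one aggregated message per tree link
        return total

    return collect(root), msgs

value, msgs = aggregate({1: 0, 2: 0, 3: 1}, {0: 5, 1: 1, 2: 2, 3: 3}, root=0)
assert (value, msgs) == (11, 3)   # 3 links -> 3 messages, versus one per reading per hop
```

Maximizing route overlap, as DRINA does, increases the number of points where such fusion can happen.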

57    The Three-Tier Security Scheme in Wireless Sensor Networks with Mobile Sinks

Mobile sinks (MSs) are vital in many wireless sensor network (WSN) applications for efficient data accumulation, localized sensor reprogramming, and for distinguishing and revoking compromised sensors. However, in sensor networks that make use of the existing key predistribution schemes for pairwise key establishment and authentication between sensor nodes and mobile sinks, the employment of mobile sinks for data collection raises a new security challenge: in the basic probabilistic and q-composite key predistribution schemes, an attacker can easily obtain a large number of keys by capturing a small fraction of nodes, and hence can gain control of the network by deploying a replicated mobile sink preloaded with some compromised keys. This article describes a three-tier general framework that permits the use of any pairwise key predistribution scheme as its basic component. The new framework requires two separate key pools, one for the mobile sink to access the network, and one for pairwise key establishment between the sensors. To further reduce the damage caused by stationary access node replication attacks, we have strengthened the authentication mechanism between the sensor and the stationary access node in the proposed framework. Through detailed analysis, we show that our security framework has higher network resilience to a mobile sink replication attack than the polynomial pool-based scheme.

DISTRIBUTED AND PARALLEL

2012


58   A 3N Approach to Network Control and Management

As network technology and applications continue to evolve, computer networks become more and more important. However, network users can attack the network infrastructure (such as the domain name service and routing services), the network cannot provide the minimum quality of service required for control, the network situation cannot be assessed in a timely manner, and network maintenance and upgrades are not easy. We argue that one root cause of these problems is that the control, management, and forwarding functions are tightly intertwined. We advocate a complete decoupling of this functionality and propose an extreme design point that we call "3N", after the architecture's three separated networks: the forwarding network, the control network, and the management network. Accordingly, we introduce four network entities: forwarder, controller, manager, and separators. In the 3N architecture, the forwarding network mainly forwards packets at the behest of the control network and the management network; the control network mainly performs route computation for the data network; and the management network mainly learns about the situation of the data network and distributes policies and configurations. The three networks work together to form an efficient network system. In this paper we present a high-level overview of the 3N architecture and some research considerations in its realization. We believe the 3N architecture helps to improve network security, availability, manageability, scalability, and so on.

DISTRIBUTED AND PARALLEL

2012

59 Towards Practical Communication in Byzantine-Resistant DHTs

There are several analytical results on distributed hash tables (DHTs) that can tolerate Byzantine faults. Unfortunately, in such systems, operations such as data retrieval and message sending incur significant communication costs. For example, a simple scheme used in many Byzantine fault-tolerant DHT constructions requires a number of messages that is likely impractical for real-world applications, and the previous best known message complexity holds only in expectation; moreover, the corresponding protocol suffers from prohibitive costs owing to hidden constants in the asymptotic notation and to setup costs. In this paper, we focus on reducing the communication costs against a computationally bounded adversary. We employ threshold cryptography and distributed key generation to define two protocols, both of which are more efficient than existing solutions: our first protocol is deterministic, and our second is randomized with lower expected message complexity. Furthermore, both the hidden constants and the setup costs for our protocols are small, and no trusted third party is required. Finally, we present results from microbenchmarks conducted over PlanetLab showing that our protocols are practical for deployment under significant levels of churn and adversarial behavior.

DISTRIBUTED AND PARALLEL

2012

60 An Efficient and Adaptive Decentralized File Replication Algorithm in P2P File Sharing Systems

In peer-to-peer file sharing systems, file replication is widely used to reduce hot spots and improve file query efficiency. Most current file replication methods replicate files at all nodes, or at the two end points, on a client-server query path. However, these methods either have low effectiveness or come at the cost of high overhead. Server-side file replication enhances the replica hit rate, and hence lookup efficiency, but produces overloaded nodes and cannot significantly reduce query path length. Client-side file replication can greatly reduce query path length, but cannot guarantee a replica hit rate high enough to fully utilize replicas. Though replication along the query path solves these problems, it comes at a high cost in overhead due to more replicas, and produces underutilized replicas. This paper presents an Efficient and Adaptive Decentralized (EAD) file replication algorithm that achieves high query efficiency and high replica utilization at a significantly low cost. EAD enhances the utilization of file replicas by selecting query traffic hubs and frequent requesters as replica nodes, and by dynamically adapting to non-uniform and time-varying file popularity and node interest. Unlike current methods, EAD creates and deletes replicas in a decentralized, self-adaptive manner while guaranteeing high replica utilization. Theoretical analysis shows the high performance of EAD, and simulation results demonstrate its efficiency and effectiveness in comparison with other approaches in both static and dynamic environments. It dramatically reduces the overhead of file replication.

DISTRIBUTED AND PARALLEL

2012

61 Rumor Riding: Anonymizing Unstructured Peer-to-Peer Systems

Although anonymizing peer-to-peer (P2P) systems often incurs extra traffic costs, many systems try to mask the identities of their users for privacy considerations. Existing anonymity approaches are mainly path-based: peers have to pre-construct an anonymous path before transmission, and the overhead of maintaining and updating such paths is significantly high. We propose Rumor Riding (RR), a lightweight, non-path-based mutual anonymity protocol for decentralized P2P systems. Employing a random walk mechanism, RR achieves lower overhead by mainly using symmetric cryptographic algorithms.

DISTRIBUTED AND PARALLEL

2012

62 FDAC: Toward Fine-grained Distributed Data Access Control in Wireless Sensor Networks

Distributed sensor data storage and retrieval has gained increasing popularity in recent years for supporting various applications. While a distributed architecture enjoys a more robust and fault-tolerant wireless sensor network (WSN), such an architecture also poses a number of security challenges, especially when applied in mission-critical applications such as the battlefield and e-healthcare. First, since sensor data are stored and maintained by individual sensors, and unattended sensors are easily subject to strong attacks such as physical compromise, it is significantly harder to ensure data security. Second, in many mission-critical applications, fine-grained data access control is a must, as illegal access to the sensitive data may cause disastrous results and/or be prohibited by law. Last but not least, sensors are usually resource-scarce, which limits the direct adoption of expensive cryptographic primitives. To address the above challenges, we propose in this paper a distributed data access control scheme that is able to fulfill fine-grained access control over sensor data and is resilient against strong attacks such as sensor compromise and user collusion. The proposed scheme exploits a novel cryptographic primitive called attribute-based encryption (ABE), tailoring and adapting it for WSNs with respect to both performance and security requirements. The feasibility of the scheme is demonstrated by experiments on real sensor platforms. To the best of our knowledge, this paper is the first to realize distributed fine-grained data access control for WSNs.

DISTRIBUTED AND PARALLEL

2012


63 Consistent Monitoring System for Parallel and Distributed Systems

This paper proposes to build overlays that help in the monitoring of long-term availability histories of hosts, with a focus on large-scale distributed settings where hosts may be selfish or colluding (but not malicious). Concretely, we focus on the important problems of selection and discovery of such an availability monitoring overlay. We motivate six significant goals for these problems. The first three goals are consistency, verifiability, and randomness in selecting the availability monitors of nodes, so as to be probabilistically resilient to selfish and colluding nodes. The next three goals are discoverability, load balancing, and scalability in finding these monitors. We then present AVMON, an availability monitoring overlay that is the first system to satisfy all six requirements. The core algorithmic contribution of this paper is a range of protocols for discovering the availability monitoring overlay in a scalable and efficient manner, given any arbitrary monitor selection scheme that is consistent and verifiable. We mathematically analyze the performance of AVMON's discovery protocols with respect to scalability and discovery time of monitors. Most interestingly, we are able to derive optimal variants of AVMON, with the aim of minimizing memory, bandwidth, computation, and discovery time of monitors (or a subset of these metrics). Our analysis indicates that these optimal variants are also practically feasible. Finally, we perform extensive experimental evaluations of AVMON using three types of availability traces—synthetic, from PlanetLab, and from a peer-to-peer system (Overnet). Our results demonstrate that AVMON would work well in a wide variety of distributed systems.

DISTRIBUTED AND PARALLEL

2012

VANETS

78 Acknowledgment-Based Broadcast Protocol for Reliable and Efficient Data Dissemination in Vehicular Ad Hoc Networks

We propose a broadcast algorithm suitable for a wide range of vehicular scenarios, which only employs local information acquired via periodic beacon messages containing acknowledgments of the circulated broadcast messages. Each vehicle decides whether it belongs to a connected dominating set (CDS). Vehicles in the CDS use a shorter waiting period before possible retransmission. At time-out expiration, a vehicle retransmits if it is aware of at least one neighbor in need of the message. To address intermittent connectivity and the appearance of new neighbors, the evaluation timer can be restarted. Our algorithm resolves propagation at road intersections without any need to even recognize intersections. It is inherently adaptable to different mobility regimes, without the need to classify network or vehicle speeds. In a thorough simulation-based performance evaluation, our algorithm is shown to provide higher reliability and message efficiency than existing approaches for non-safety applications.

VANET 2012

64 Reliable Re-encryption in Unreliable Clouds

A key approach to secure cloud computing is for the data owner to store encrypted data in the cloud and issue decryption keys to authorized users. Then, when a user is revoked, the data owner issues re-encryption commands to the cloud to re-encrypt the data, preventing the revoked user from decrypting it, and generates new decryption keys for valid users so that they can continue to access the data. However, since a cloud computing environment comprises many cloud servers, such commands may not be received and executed by all of the cloud servers due to unreliable network communications. In this paper, we solve this problem by proposing a time-based re-encryption scheme, which enables the cloud servers to automatically re-encrypt data based on their internal clocks. Our solution is built on top of a new encryption scheme, attribute-based encryption, to allow fine-grained access control, and does not require perfect clock synchronization for correctness.

VANET 2012

Mobile computing


65 Energy-Efficient Cluster-Based Routing Protocol for Wireless Sensor Networks

Wireless sensor networks consist of hundreds of tiny, inexpensive, resource-constrained sensor nodes. Routing is a challenging task in such an environment, mainly due to the unique constraints wireless sensor networks suffer from. The highly dynamic topology of wireless sensor networks is another challenge, due to which existing routes become unavailable very frequently. Since energy efficiency of protocols and algorithms is a major design goal in sensor network setups, this paper proposes a novel energy-efficient routing protocol. The proposed protocol is hierarchical and cluster-based. In this protocol, the base station selects the cluster heads (CHs). The selection procedure is carried out in two stages. In the first stage, all candidate nodes for becoming a CH are listed, based on parameters such as the relative distance of the candidate node from the base station, its remaining energy level, the probable number of neighboring sensor nodes the candidate node can have, and the number of times the candidate node has already been a cluster head. The cluster head generates two schedules for the cluster members, namely Sleep and TDMA-based Transmit. Data transmission inside the cluster, and from the cluster head to the base station, takes place in a multi-hop fashion. The current session ends when the energy level of any one of the current cluster heads drops to half of its initial amount. Simulation results for the proposed protocol are reported, and future scopes of this work are outlined.

Mobile computing

2012
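The first-stage shortlist can be sketched as a weighted score over the four listed parameters. The weights and field names below are illustrative assumptions; the abstract does not publish an exact formula.

```python
def ch_score(node, w=(0.4, 0.3, 0.2, 0.1)):
    """Higher is better: reward remaining energy and neighbor count, penalize
    distance to the base station and prior terms served as cluster head.
    Weights are illustrative, not from the paper."""
    w_e, w_n, w_d, w_c = w
    return (w_e * node["energy"] + w_n * node["neighbors"]
            - w_d * node["dist_to_bs"] - w_c * node["times_ch"])

candidates = [
    {"id": "n1", "energy": 0.9, "neighbors": 6, "dist_to_bs": 0.5, "times_ch": 1},
    {"id": "n2", "energy": 0.4, "neighbors": 8, "dist_to_bs": 0.2, "times_ch": 3},
]
best = max(candidates, key=ch_score)   # the base station picks the top scorers as CHs
```

Rotating the penalty on `times_ch` is what spreads the cluster-head burden, and hence energy drain, across the network over successive sessions.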

66  SDSM: A Secure Data Service Mechanism in Mobile Cloud Computing

To enhance the security of mobile cloud users, a few proposals have been presented recently. However, we argue that most of them are not suitable for the mobile cloud, where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile-user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables mobile users to enjoy secure outsourced data services at a minimized security management overhead. The core idea of SDSM is to outsource not only the data but also the security management to the mobile cloud in a trusted way. Our analysis shows that the proposed mechanism has many advantages over existing traditional methods, such as lower overhead and convenient updates, which better cater to the requirements of mobile cloud computing scenarios.

Mobile Computing

2013

67      DSS: Distributed SINR-Based Scheduling Algorithm for Multihop Wireless Networks

The problem of developing distributed scheduling algorithms for high throughput in multihop wireless networks has been extensively studied in recent years. The design of a distributed low-complexity scheduling algorithm becomes even more challenging when taking into account a physical interference model, which requires the SINR at a receiver to be checked when making scheduling decisions. To do so, we need to check whether a transmission failure is caused by interference due to simultaneous transmissions from distant nodes. In this paper, we propose a scheduling algorithm under a physical interference model which is amenable to distributed implementation with 802.11 CSMA technologies. The proposed scheduling algorithm is shown to achieve throughput optimality. We present two variations of the algorithm to enhance the delay performance and to reduce the control overhead, respectively, while retaining throughput optimality.

Mobile computing

2013

68 Channel-Aware Routing in MANETs With Route Handoff (AOMDV)

In wireless mobile ad hoc networks (MANETs), packet transmission is impaired by radio link fluctuations. This paper proposes a novel channel-adaptive routing protocol which extends the Ad hoc On-demand Multipath Distance Vector (AOMDV) routing protocol to accommodate channel fading. Specifically, the proposed Channel-Aware AOMDV (CA-AOMDV) uses the average non-fading duration of the channel as a routing metric to select stable links for path discovery, and applies a preemptive handoff strategy to maintain reliable connections by exploiting channel state information. Using the same information, paths can be reused when they become available again, rather than being discarded. We provide new theoretical results for the downtime and lifetime of a live-die-live multiple-path system, as well as detailed theoretical expressions for common network performance measures, providing useful insights into the differences in performance between CA-AOMDV and AOMDV. Simulation and theoretical results show that CA-AOMDV greatly improves network performance over AOMDV.

Mobile computing

2012
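One way to read the routing metric: a path is only as stable as its weakest link's average non-fading duration (ANFD). A toy selection sketch, where the path names, link values, and `path_stability` helper are illustrative assumptions:

```python
def path_stability(link_anfd):
    """A path fails when any link fades, so its stability is bounded by the
    minimum average non-fading duration (seconds) over its links."""
    return min(link_anfd)

# two candidate paths with per-link ANFD estimates (illustrative values)
paths = {"A": [2.1, 3.4, 1.9], "B": [2.8, 2.5, 3.0]}
best = max(paths, key=lambda name: path_stability(paths[name]))
assert best == "B"   # B's weakest link (2.5 s) outlasts A's (1.9 s)
```

The preemptive-handoff part of CA-AOMDV then switches away from a path before its weakest link is predicted to fade, rather than after the loss is observed.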

69 A Node-Disjoint Multipath Routing Method Based on AODV Protocol for MANETs

Frequent link failures are caused in mobile ad hoc networks due to node mobility and the use of unreliable wireless channels for data transmission. Because of this, multipath routing protocols have become an important research issue. In this paper, we propose and implement a node-disjoint multipath routing method based on the AODV protocol. The main goal of the proposed method is to determine all available node-disjoint routes from source to destination with minimum routing control overhead. With the proposed approach, as soon as the first route to the destination is determined, the source starts data transmission. All the other backup routes, if available, are determined concurrently with the data transmission through the first route. This minimizes the initial delay, because data transmission starts as soon as the first route is discovered. We also propose three different route maintenance methods, each of which is used with the proposed route discovery process for performance evaluation. The results obtained through various simulations show the effectiveness of our proposed methods in terms of route availability, control overhead, average end-to-end delay, and packet delivery ratio.

Mobile computing

2012

70 An Energy-Efficient Data Storage Scheme in Wireless Sensor Networks

Decentralized attribute-based encryption (ABE) is a variant of a multi-authority ABE scheme in which each authority can issue secret keys to users independently, without any cooperation or a central authority. This is in contrast to previous constructions, where multiple authorities must be online and set up the system interactively, which is impractical. Hence, it is clear that a decentralized ABE scheme eliminates the heavy communication cost and the need for collaborative computation in the setup stage. Furthermore, every authority can join or leave the system freely without the necessity of re-initializing the system. In contemporary multi-authority ABE schemes, a user's secret keys from different authorities must be tied to his global identifier (GID) to resist collusion attacks. However, this compromises the user's privacy: multiple authorities can collaborate to trace the user by his GID, collect his attributes, and then impersonate him. Therefore, constructing a privacy-preserving decentralized ABE scheme remains a challenging research problem. In this paper, we propose a privacy-preserving decentralized key-policy ABE scheme where each authority can issue secret keys to a user independently without knowing anything about his GID. Therefore, even if multiple authorities are corrupted, they cannot collect the user's attributes by tracing his GID. Notably, our scheme only requires standard complexity assumptions (e.g., decisional bilinear Diffie-Hellman) and does not require any cooperation between the multiple authorities, in contrast to the previous comparable scheme, which requires non-standard complexity assumptions (e.g., q-decisional Diffie-Hellman inversion) and interactions among multiple authorities. To the best of our knowledge, this is the first privacy-preserving decentralized ABE scheme based on standard complexity assumptions.

Mobile computing

2012

71 The Black-Hole Node Attack in MANETs

To counter the black-hole attack, we present two possible solutions. The first is to find more than one route to the destination. The second is to exploit the packet sequence number included in any packet header. Computer simulation shows that, in comparison to the original ad hoc on-demand distance vector (AODV) routing scheme, the second solution can verify 75% to 98% of the routes to the destination, depending on the pause time, at a minimum cost of delay in the network. The main objective of this paper is to analyze the black-hole attack in MANETs and its solutions.

Mobile computing

2012
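The second countermeasure, sanity-checking the destination sequence number carried in a route reply, can be sketched as a simple threshold test. The threshold value and helper name are illustrative assumptions, not the paper's exact rule.

```python
def suspicious_rrep(advertised_seq, last_known_seq, threshold=50):
    """A black-hole node typically advertises an implausibly fresh destination
    sequence number to win AODV route selection; flag jumps above a threshold
    (threshold is illustrative and would be tuned to network dynamics)."""
    return advertised_seq - last_known_seq > threshold

assert suspicious_rrep(advertised_seq=1000, last_known_seq=10)       # likely black hole
assert not suspicious_rrep(advertised_seq=30, last_known_seq=10)     # plausible update
```

A flagged reply would then be held back until a second, independent route (the first countermeasure) corroborates it.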

72 Fast Data Collection in Tree-Based Wireless Sensor Networks

We investigate the following fundamental question: how fast can information be collected from a wireless sensor network organized as a tree? To address this, we explore and evaluate a number of different techniques using realistic simulation models under the many-to-one communication paradigm known as convergecast. We first consider time scheduling on a single frequency channel with the aim of minimizing the number of time slots required (the schedule length) to complete a convergecast. Next, we combine scheduling with transmission power control to mitigate the effects of interference, and show that while power control helps in reducing the schedule length under a single frequency, scheduling transmissions using multiple frequencies is more efficient. We give lower bounds on the schedule length when interference is completely eliminated, and propose algorithms that achieve these bounds. We also evaluate the performance of various channel assignment methods and find empirically that for moderate-size networks of about 100 nodes, the use of multi-frequency scheduling can suffice to eliminate most of the interference. The data collection rate then no longer remains limited by interference but by the topology of the routing tree. To this end, we construct degree-constrained spanning trees and capacitated minimal spanning trees, and show significant improvement in scheduling performance over different deployment densities. Lastly, we evaluate the impact of different interference and channel models on the schedule length.

Mobile computing

2012
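The single-channel scheduling step can be sketched as a greedy slot assignment in which each slot packs a maximal set of child-to-parent transmissions sharing no endpoint. This is a simplification of the paper's algorithms: real convergecast also pipelines packets hop by hop, and this sketch schedules one transmission per tree edge.

```python
def schedule(edges):
    """Greedy TDMA sketch: edges are (child, parent) links of the routing tree.
    Each slot admits links whose endpoints are not already busy that slot."""
    slots = []
    pending = list(edges)
    while pending:
        busy, slot, rest = set(), [], []
        for (child, parent) in pending:
            if child not in busy and parent not in busy:
                slot.append((child, parent))
                busy |= {child, parent}
            else:
                rest.append((child, parent))
        slots.append(slot)
        pending = rest
    return slots

# a 3-node chain 2 -> 1 -> 0 cannot parallelize: node 1 cannot send and receive at once
assert len(schedule([(2, 1), (1, 0)])) == 2
```

The tree-construction results in the abstract (degree-constrained spanning trees) matter precisely because a high-degree sink serializes all of its children into separate slots under any such schedule.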

73 Protecting Location Privacy in Sensor Networks against a Global Eavesdropper

While many protocols for sensor network security provide confidentiality for the content of messages, contextual information usually remains exposed. Such contextual information can be exploited by an adversary to derive sensitive information such as the locations of monitored objects and data sinks in the field. Attacks on these components can significantly undermine any network application. Existing techniques defend against the leakage of location information from a limited adversary who can only observe network traffic in a small region. However, a stronger adversary, the global eavesdropper, is realistic and can defeat these existing techniques. This paper first formalizes the location privacy issues in sensor networks under this strong adversary model and computes a lower bound on the communication overhead needed to achieve a given level of location privacy. The paper then proposes two techniques to provide location privacy to monitored objects (source-location privacy)—periodic collection and source simulation—and two techniques to provide location privacy to data sinks (sink-location privacy)—sink simulation and backbone flooding. These techniques provide trade-offs between privacy, communication cost, and latency. Through analysis and simulation, we demonstrate that the proposed techniques are efficient and effective for source- and sink-location privacy in sensor networks.

Mobile computing

2012

74 Fast Detection of Mobile Replica Node Attacks in Wireless Sensor Networks Using Sequential Hypothesis Testing

Due to the unattended nature of wireless sensor networks, an adversary can capture and compromise sensor nodes, generate replicas of those nodes, and mount a variety of attacks with the replicas he injects into the network. These attacks are dangerous because they allow the attacker to leverage the compromise of a few nodes to exert control over much of the network. Several replica node detection schemes in the literature have been proposed to defend against these attacks in static sensor networks. These approaches rely on fixed sensor locations and hence do not work in mobile sensor networks, where sensors are expected to move. In this work, we propose a fast and effective mobile replica node detection scheme using the Sequential Probability Ratio Test. To the best of our knowledge, this is the first work to tackle the problem of replica node attacks in mobile sensor networks. We show analytically and through simulation experiments that our schemes achieve effective and robust replica detection capability with reasonable overheads.

Mobile computing

2012
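
The scheme's core tool, the Sequential Probability Ratio Test, is compact enough to sketch. The following is a minimal illustration and not the paper's implementation: an observation is 1 when a node's measured speed exceeds the system's maximum speed, and the hypothesis probabilities and error rates (p0, p1, alpha, beta) are placeholder values of ours.

```python
import math

def sprt(observations, p0=0.1, p1=0.9, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test over Bernoulli observations.
    Each observation is 1 if the node's measured speed looked suspicious,
    else 0.  Returns 'replica', 'benign', or 'undecided' (data ran out
    before either decision boundary was crossed)."""
    upper = math.log((1 - beta) / alpha)   # cross this: accept H1 (replica)
    lower = math.log(beta / (1 - alpha))   # cross this: accept H0 (benign)
    llr = 0.0                              # accumulated log-likelihood ratio
    for x in observations:
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "replica"
        if llr <= lower:
            return "benign"
    return "undecided"
```

With these placeholder parameters, three consecutive suspicious observations are enough to cross the upper boundary, which is exactly the "fast detection" property the abstract emphasizes.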

Page 23: IEEE Projects 2013, Mtech Projects 2013, Cloud Computing 2013, Final year engineering projects IEEE based

75 Secure Data Collection in Wireless Sensor Networks Using Randomized Dispersive Routes

Compromised-node and denial-of-service are two key attacks in wireless sensor networks (WSNs). In this paper, we study data delivery mechanisms that can with high probability circumvent black holes formed by these attacks. We argue that classic multipath routing approaches are vulnerable to such attacks, mainly due to their deterministic nature. So once the adversary acquires the routing algorithm, it can compute the same routes known to the source, hence making all information sent over these routes vulnerable to its attacks. In this paper, we develop mechanisms that generate randomized multi-path routes. Under our designs, the routes taken by the “shares” of different packets change over time. So even if the routing algorithm becomes known to the adversary, the adversary still cannot pinpoint the routes traversed by each packet. Besides randomness, the generated routes are also highly dispersive and energy efficient, making them quite capable of circumventing black holes. We analytically investigate the security and energy performance of the proposed schemes. We also formulate an optimization problem to minimize the end-to-end energy consumption under given security constraints. Extensive simulations are conducted to verify the validity of our mechanisms.

Mobile computing

2012
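
The share-and-randomize idea in the abstract can be sketched in a few lines. This is an illustrative sketch under our own assumptions (XOR n-of-n secret sharing and a uniform random walk), not the paper's actual share-generation or route-construction mechanisms:

```python
import os
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(packet, n):
    """Split a packet into n XOR shares: every share is needed to rebuild
    the packet, so capturing only some of the routes reveals nothing."""
    shares = [os.urandom(len(packet)) for _ in range(n - 1)]
    last = packet
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

def random_walk(neighbors, src, sink, max_hops=50, rng=random):
    """One share's route: each next hop is chosen at random, so knowing
    the routing algorithm does not let an adversary predict the path."""
    path, node = [src], src
    while node != sink and len(path) <= max_hops:
        node = rng.choice(neighbors[node])
        path.append(node)
    return path
```

Each share would then travel an independently drawn route, which is the "dispersive" property the paper relies on to circumvent black holes.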

76 On-demand Temporary Route Recovery for Frequent Link Failures in Ad Hoc Networks

  Mobile computing

2012

77 Secure Distance-Based Localization In The Presence Of Cheating Beacon Nodes

Localization in the presence of malicious beacon nodes is an important problem in wireless networks. Although significant progress has been made on this problem, some fundamental theoretical questions still remain unanswered: in the presence of malicious beacon nodes, what are the necessary and sufficient conditions to guarantee a bounded error during 2-dimensional location estimation? Under these necessary and sufficient conditions, what class of localization algorithms can provide that error bound? In this paper, we try to answer these questions. Specifically, we show that, when the number of malicious beacons is greater than or equal to some threshold, there is no localization algorithm that can have a bounded error. Furthermore, when the number of malicious beacons is below that threshold, we identify a class of localization algorithms that can ensure that the localization error is bounded. We also outline two algorithms in this class, one of which is guaranteed to finish in polynomial time (in the number of beacons providing information) in the worst case, while the other is based on a heuristic and is practically efficient. For completeness, we also extend the above results to the 3-dimensional case. Experimental results demonstrate that our solution has very good localization accuracy and computational efficiency.

Mobile computing

2012

79 Power Aware Ad Hoc On-demand Distance Vector (PAAODV) Routing for MANETs


Wireless Communication

2012

80   Power Management for Throughput Enhancement in Wireless Ad-Hoc Networks

In this paper we introduce the notion of power management within the context of wireless ad-hoc networks. More specifically, we investigate the effects of using different transmit powers on the average power consumption and end-to-end network throughput in a wireless ad-hoc environment. This power management approach would help in reducing the system power consumption and hence prolonging the battery life of mobile nodes. Furthermore, it improves the end-to-end network throughput as compared to other ad-hoc networks in which all mobile nodes use the same transmit power. The improvement is due to the achievement of a tradeoff between minimizing interference ranges, reduction in the average number of hops to reach a destination, the probability of having isolated clusters, and the average number of transmissions (including retransmissions due to collisions). The protocols would first dynamically determine an optimal connectivity range wherein they adapt their transmit powers so as to only reach a subset of the nodes in the network. The connectivity range would then be dynamically changed in a distributed manner so as to achieve the near optimal throughput. Minimal power routing is used to further enhance performance.

Wireless Communication

2012
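
The idea of determining an optimal connectivity range can be illustrated with a toy search for the smallest common transmit range that still keeps the network connected. This is our own simplified, centralized sketch; the paper's protocols adapt per-node powers dynamically and in a distributed manner:

```python
import math
from collections import deque

def connected(nodes, r):
    """BFS over the unit-disk graph induced by transmit range r."""
    adj = {i: [j for j in range(len(nodes)) if j != i
               and math.dist(nodes[i], nodes[j]) <= r]
           for i in range(len(nodes))}
    seen, queue = {0}, deque([0])
    while queue:
        for j in adj[queue.popleft()]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(nodes)

def min_connected_range(nodes):
    """Smallest common transmit range keeping the network connected.
    Only the pairwise distances need to be tried as candidate ranges."""
    dists = sorted({math.dist(a, b) for a in nodes for b in nodes if a != b})
    for r in dists:
        if connected(nodes, r):
            return r
    return None
```

Shrinking the range below this value partitions the network into isolated clusters, which is exactly the trade-off against interference that the abstract describes.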

81 A Secure and Power Efficient Routing Scheme for Ad Hoc Networks

Wireless Communication

2012

82 A Secure Web Service-based Platform for Wireless Sensor Network Management and Interrogation

A Wireless Sensor Network (WSN) is composed of small, low-cost and low-energy-consumption devices called sensors. Those sensors are deployed in a monitored area. They capture measurements related to the monitored phenomenon (temperature, humidity, etc.) and send them through multi-hop routing to a sink node that delivers them to a Base Station for use and decision making. WSNs are used in several fields ranging from military applications to civilian ones, such as security, home automation and health care. Up to now, most works have focused on designing routing protocols to address energy consumption, fault tolerance and security. In this paper, we address the issue of secure management and interrogation of WSNs, mainly through the Internet. In our work, we designed and implemented a generic approach based on Web Services that builds a standardized interface between a WSN and external networks and applications. Our approach uses a gateway that offers a synthesis of the Web Services offered by the WSN, assuring its interrogation and management. Furthermore, an Authentication, Authorization and Accounting (AAA) mechanism has been implemented to provide security services and a billing system for WSN interrogation. We designed our architecture as a generic framework. Then, we instantiated it for two use cases. Furthermore, we designed, implemented and tested Directed Service Oriented Diffusion (DSOD), a service-oriented routing protocol for WSNs.

Wireless Sensor Networks

2012

83     Supporting Efficient and Scalable Multicasting over Mobile Ad Hoc Networks

Group communications are important in Mobile Ad hoc Networks (MANET). Multicast is an efficient method for implementing group communications. However, it is challenging to implement efficient and scalable multicast in MANET due to the difficulty in group membership management and multicast packet forwarding over a dynamic topology. We propose a novel Efficient Geographic Multicast Protocol (EGMP). EGMP uses a virtual-zone-based structure to implement scalable and efficient group membership management. A network-wide zone-based bi-directional tree is constructed to achieve more efficient membership management and multicast delivery. The position information is used to guide the zone structure building, multicast tree construction and multicast packet forwarding, which efficiently reduces the overhead for route searching and tree structure maintenance. Several strategies have been proposed to further improve the efficiency of the protocol, for example, introducing the concept of zone depth for building an optimal tree structure and integrating the location search of group members with the hierarchical group membership management. Finally, we design a scheme to handle the empty-zone problem faced by most routing protocols using a zone structure. The scalability and the efficiency of EGMP are evaluated through simulations and quantitative analysis. Our simulation results demonstrate that EGMP has high packet delivery ratio, and low control overhead and multicast group joining delay under all test scenarios, and is scalable to both group size and network size. Compared to Scalable Position-Based Multicast (SPBM) [20], EGMP has significantly lower control overhead, data transmission overhead, and multicast group joining delay.

Wireless Sensor Networks

2012
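
The virtual-zone structure underlying EGMP-style protocols can be illustrated with two tiny helpers: mapping a position to a grid zone, and taking one greedy zone hop toward a destination zone. The function names and the fixed zone size are our own illustrative choices, not EGMP's:

```python
def zone_id(x, y, zone_size=100.0):
    """Map a node position to its virtual grid zone."""
    return (int(x // zone_size), int(y // zone_size))

def next_zone(cur, dst):
    """One greedy geographic hop from the current zone toward the
    destination zone (per-axis step of -1, 0, or +1)."""
    step = lambda a, b: a + (b > a) - (b < a)
    return (step(cur[0], dst[0]), step(cur[1], dst[1]))
```

Because the zone of any position is computable locally, forwarding decisions need no route discovery, which is where the overhead reduction claimed in the abstract comes from.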

84 A Novel Indirect Trust based Link State Routing Scheme using a Robust Route Trust Method for Wireless Sensor Networks

Integration of trust into routing mechanisms already prevalent in Wireless Sensor Networks (WSN) has become an interesting research area of late. Several methods exist for the assignment of trust to the nodes present in a WSN. However, the real challenge lies in proper integration of this trust into an existing routing protocol for the synthesis of a trust-aware routing algorithm. In this paper we use a geometric-mean-based indirect trust evaluation mechanism to calculate the trust of individual nodes, and thereby use the calculated trusts to determine the different route trusts (RTs). We present a link state routing protocol based only on these indirect trusts, which forms the routes and finds the most trustworthy route among them by comparing the calculated route trust values for each route present in the network. We have developed three algorithms related to this and have shown their merits. Finally, we compare our work with similar trust-integrated routing protocols and show its advantages over them.

Wireless Sensor Networks

2012
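
The route-trust computation described above, a geometric mean of per-node trusts followed by selection of the most trustworthy route, can be sketched as follows (route names and trust values are hypothetical):

```python
import math

def route_trust(node_trusts):
    """Geometric mean of the per-node trusts along a route; a single
    low-trust hop drags the whole route's trust down."""
    return math.prod(node_trusts) ** (1 / len(node_trusts))

def best_route(routes):
    """routes: {route_name: [trust of each node on the route]}.
    Returns the name of the most trustworthy route."""
    return max(routes, key=lambda r: route_trust(routes[r]))
```

Note how route B below has two perfect nodes but loses to the uniformly decent route A, which is the behavior a geometric mean is chosen for.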


85 A Hybrid Key Management Protocol for Wireless Sensor Networks

Wireless Sensor Networks (WSNs) are wireless ad-hoc networks of tiny battery-operated wireless sensors. They are usually deployed in unsecured, open, and harsh environments where it is difficult for humans to perform continuous monitoring. Due to this nature of deployment, it is crucial to provide security mechanisms for authenticating data. Key management is a pre-requisite for any security mechanism. Due to the memory, computation, and communication constraints of sensor nodes, distribution and management of keys in WSNs is a challenging task. Because of their lightweight nature, symmetric cryptosystems are a natural choice for key management in WSNs. However, they often fail to provide a good trade-off between resilience and storage. On the other hand, Public Key Infrastructure (PKI) is infeasible in WSNs because it requires a continuously available trusted third party and heavy computation for certificate verification. Pairing-Based Cryptography (PBC) has paved the way for parties to agree on keys without any interaction. It has relaxed the requirement of expensive certificate verification in PKI systems. In this paper, we propose a new hybrid ID-based non-interactive key management protocol for WSNs, which leverages the benefits of both symmetric-key cryptosystems and PBC by combining them. The proposed protocol is very flexible and suits many applications. We also provide mechanisms for key refresh when the network changes.

Wireless Sensor Networks

2012

86 PowerNap: An Energy Efficient MAC Layer for Random Routing in Wireless Sensor Networks

Idle listening is the biggest challenge for the energy efficiency and longevity of multihop wireless sensor network (WSN) deployments. While existing coordinated sleep/wakeup scheduling protocols eliminate idle listening for simple traffic patterns, they are unsuitable for the complex traffic patterns of random routing protocols. We present a novel coordinated sleep/wakeup protocol, POWERNAP, which avoids the overhead of distributing complex, large sleep/wakeup scheduling information to the nodes. POWERNAP piggybacks onto the relayed data packets the seed of the pseudo-random generator that encodes the scheduling information, and enables any recipient/snooper to calculate its sleep/wakeup schedule from this seed. In essence, POWERNAP trades off extra computation to avoid expensive control packet transmissions. We show through simulations and a real implementation on TelosB motes that POWERNAP eliminates the idle-listening problem efficiently and achieves self-stabilizing, low-latency operation.

Wireless Sensor Networks

2012
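
The key trick, deriving a sleep/wakeup schedule from a piggybacked PRNG seed rather than distributing the schedule itself, can be sketched as follows. Frame and duty-cycle sizes are placeholder values of ours, not POWERNAP's:

```python
import random

def wakeup_slots(seed, frame_slots=100, duty_slots=5):
    """Derive a node's wake-up slots for one frame from a shared seed.
    Any node that hears the seed recomputes the identical schedule, so no
    schedule-distribution control traffic is ever sent."""
    rng = random.Random(seed)            # seeded PRNG: fully deterministic
    return sorted(rng.sample(range(frame_slots), duty_slots))
```

Two nodes holding the same seed agree on the schedule without exchanging a single control packet, which is the computation-for-communication trade-off the abstract describes.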

89    An Energy-Efficient Data Fusion Protocol for Wireless Sensor Network

It is a critical consideration to collect and fuse sensed information in an energy-efficient manner to obtain a long lifetime for the sensor network. Based on our findings that the conventional methods of direct transmission, shortest path routing, and the Dempster-Shafer tool may not be optimal for data fusion in sensor networks, we propose LEECF (Low-Energy Event Centric Fusion), an event-centric protocol that utilizes the centric sensor node to aggregate the event data among the triggered sensors with short delay. LEECF incorporates fast information fusion into the routing protocol to reduce the amount of information that must be transmitted to the sink and the time complexity of the fusion center's computation. Simulations show that LEECF can decrease energy consumption and fusion time significantly compared with conventional routing protocols and D-S evidence theory as the number of sensors increases.

Wireless Sensor Networks

2012

90 Intelligent teaching models for STEM related careers using a service-oriented architecture and management science

The development of the World Wide Web (WWW) a little more than a decade ago has caused an information explosion that needs an Intelligent Web (IW) for users to easily control their information and commercial needs. Therefore, engineering schools have offered a variety of IW courses to cultivate hands-on experience and training for industrial systems. In this study, the Intelligent Teaching Models for STEM Related Careers Using Service-Oriented Architecture (SOA) and Management Science project course has been designed. The goal is to help students learn theoretical concepts of the IW, practice advanced technical skills, and discover knowledge to solve problems. Undergraduate Science, Technology, Engineering and Mathematics (STEM) students were involved in the development of innovative approaches and techniques. They are able to help solve the problems of disease misdiagnosis that medical and health care professionals experience. They co-authored and presented numerous research papers introducing the solution at scientific conferences and in journals. This study provides the solution in the form of intelligent models integrating Service-Oriented Architecture and Management Science to decrease disease misdiagnosis in health care. Results show that this new course strengthens the capacity and quality of STEM undergraduate degree programs and increases overall graduate student enrollment. It promotes a vigorous STEM academic environment and increases the number of students entering STEM careers. It expands the breadth of faculty and student involvement in research and development. It enhances and leverages the active engagement of faculty in technology transfer and translational research. It improves and develops new relationships between educational institutions and research funding entities to broaden the university's research portfolio and increase funding. The proposed project course is a software engineering research methodology, an educational tool, and a teaching technique needed in future medical and health IT fields.

Datamining/Webmining

2012


92 A smart communication gateway for V2I applications in Public Transport

In this paper, we present a smart gateway which improves communication between future Public Transport vehicles and their back-offices. This gateway implements context-based wireless access network selection and improves end-to-end communication Quality of Service (QoS), thereby responding to the requirements of the new intelligent transportation systems architecture developed in Europe through the European Bus System of the Future (EBSF) project. This Service Oriented Architecture (SOA) allows resource sharing such as Vehicle to Infrastructure (V2I) communications. We show that the proposed smart gateway responds to the requirements of the EBSF project, and we further demonstrate its effectiveness in an alarm scenario, a critical scenario that the gateway has to handle.

Datamining/Webmining

2012

93 Design and Implementation of Teaching Management Systems Integrated of Vocational College Based on SOA

Teaching management is the central work of a vocational college. This paper discusses some problems that exist in current teaching management systems. Based on the theory of service-oriented architecture, we propose an SOA-based architecture and hierarchical model for an academic management system and develop an integrated teaching information platform built on J2EE-based web services. This resolves the data-island problem of teaching management systems.

Datamining/Webmining

2012

94 Horizontal Aggregations in SQL to Prepare Data Sets for Data Mining Analysis

Preparing a data set for analysis is generally the most time-consuming task in a data mining project, requiring many complex SQL queries, joining tables, and aggregating columns. Existing SQL aggregations have limitations to prepare data sets because they return one column per aggregated group. In general, a significant manual effort is required to build data sets where a horizontal layout is required. We propose simple, yet powerful, methods to generate SQL code to return aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g., point-dimension, observation-variable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations: CASE, exploiting the programming CASE construct; SPJ, based on standard relational algebra operators (SPJ queries); and PIVOT, using the PIVOT operator, which is offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods. Our CASE method has similar speed to the PIVOT operator, and it is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not.

Datamining/Webmining

2012
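
Of the three evaluation methods, the CASE method is the easiest to illustrate: the generator emits one SUM(CASE ...) column per pivot value. Below is a hedged sketch of such a generator with a small in-memory demo; the table and column names are examples of ours, and the plain string interpolation is for illustration only (it is not injection-safe or identifier-quoted, as a production generator would need to be):

```python
import sqlite3

def horizontal_sum(table, group_col, pivot_col, value_col, pivot_values):
    """Emit a CASE-based horizontal aggregation: one output column per
    pivot value, so each group becomes a single wide row."""
    cases = ",\n  ".join(
        f"SUM(CASE WHEN {pivot_col} = '{v}' THEN {value_col} ELSE 0 END)"
        f" AS {value_col}_{v}"
        for v in pivot_values)
    return (f"SELECT {group_col},\n  {cases}\n"
            f"FROM {table}\nGROUP BY {group_col};")

# quick demo against an in-memory database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales(store TEXT, month TEXT, amount INT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("s1", "jan", 10), ("s1", "feb", 5), ("s2", "jan", 7)])
sql = horizontal_sum("sales", "store", "month", "amount", ["jan", "feb"])
result = {store: (jan, feb) for store, jan, feb in con.execute(sql)}
```

The demo turns the vertical (store, month, amount) rows into one row per store with a column per month, which is the horizontal layout the abstract targets.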

95 Outsourced Similarity Search on Metric Data Assets

This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider for similarity queries on the transformed data. Our techniques provide interesting trade-offs between query cost and accuracy. They are then further extended to offer an intuitive privacy guarantee. Empirical studies with real data demonstrate that the techniques are capable of offering privacy while enabling efficient and accurate processing of similarity queries.

Datamining/Webmining

2012

96 Bootstrapping Ontologies for Web Services

Ontologies have become the de-facto modeling tool of choice, employed in many applications and prominently in the semantic web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping, which aims at automatically generating concepts and their relations in a given domain, is a promising technique for ontology construction. Bootstrapping an ontology based on a set of predefined textual sources, such as web services, must address the problem of multiple, largely unrelated concepts. In this paper, we propose an ontology bootstrapping process for web services. We exploit the advantage that web services usually consist of both WSDL and free text descriptors. The WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web context generation. Our proposed ontology bootstrapping process integrates the results of both methods and applies a third method to validate the concepts using the service free text descriptor, thereby offering a more accurate definition of ontologies. We extensively validated our bootstrapping method using a large repository of real-world web services and verified the results against existing ontologies. The experimental results indicate high precision. Furthermore, the recall versus precision comparison of the results when each method is separately implemented presents the advantage of our integrated bootstrapping approach.

Datamining/Webmining

2012
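
The first of the two WSDL evaluation methods, TF/IDF, is standard and easy to sketch. Here is a minimal version over token lists using the plain tf · log(N/df) form; the paper's exact weighting and tokenization may differ:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score terms per document by TF/IDF: term frequency within the
    document, damped by how many documents contain the term."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    return [{term: (count / len(d)) * math.log(n / df[term])
             for term, count in Counter(d).items()}
            for d in docs]
```

Terms that appear in every descriptor (here "book") score zero, so only discriminative terms survive as candidate ontology concepts.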


97 Publishing Search Logs—A Comparative Study of Privacy Guarantees

Search engine companies collect the “database of intentions,” the histories of their users' search queries. These search logs are a gold mine for researchers. Search engine companies, however, are wary of publishing search logs in order not to disclose sensitive information. In this paper, we analyze algorithms for publishing frequent keywords, queries, and clicks of a search log. We first show how methods that achieve variants of k-anonymity are vulnerable to active attacks. We then demonstrate that the stronger guarantee ensured by ε-differential privacy unfortunately does not provide any utility for this problem. We then propose an algorithm ZEALOUS and show how to set its parameters to achieve (ε, δ)-probabilistic privacy. We also contrast our analysis of ZEALOUS with an analysis by Korolova et al. [17] that achieves (ε', δ')-indistinguishability. Our paper concludes with a large experimental study using real applications where we compare ZEALOUS and previous work that achieves k-anonymity in search log publishing. Our results show that ZEALOUS yields comparable utility to k-anonymity while at the same time achieving much stronger privacy guarantees.

Datamining/Webmining

2012
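
The two-threshold structure of ZEALOUS (drop rare items, add Laplace noise, threshold again) can be sketched as below. The thresholds, noise scale, and seeded RNG are placeholder choices of ours; the paper derives the parameter settings required for (ε, δ)-probabilistic privacy:

```python
import math
import random
from collections import Counter

def laplace_noise(scale, rng):
    """Inverse-CDF sample from Laplace(0, scale)."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def zealous(queries, tau1=5, tau2=10, scale=2.0, rng=None):
    """Two-threshold noisy publishing in the spirit of ZEALOUS: drop items
    rarer than tau1, add Laplace noise, publish noisy counts >= tau2."""
    rng = rng or random.Random(0)
    published = {}
    for item, count in Counter(queries).items():
        if count < tau1:
            continue                     # first threshold: drop rare items
        noisy = count + laplace_noise(scale, rng)
        if noisy >= tau2:                # second threshold on the noisy count
            published[item] = round(noisy)
    return published
```

The first threshold bounds each rare (and thus potentially identifying) item's exposure before any noise is even added; the noise plus second threshold protects the counts of the items that are published.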

98 Slicing: A New Approach to Privacy Preserving Data Publishing

Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses a considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply to data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the ℓ-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.

Datamining/Webmining

2012
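
The core slicing operation, vertical column grouping plus horizontal bucketing with a within-bucket random permutation of each column group, can be sketched as below. The column indices and bucket size are illustrative; the real algorithm additionally chooses the partitions so the result satisfies ℓ-diversity:

```python
import random

def slice_table(rows, column_groups, bucket_size=2, rng=None):
    """Partition attributes vertically into column_groups and tuples
    horizontally into buckets, then independently shuffle each column
    group within each bucket, breaking cross-group linkage."""
    rng = rng or random.Random(0)
    out = []
    for start in range(0, len(rows), bucket_size):
        bucket = rows[start:start + bucket_size]
        pieces = []
        for group in column_groups:
            vals = [tuple(row[c] for c in group) for row in bucket]
            rng.shuffle(vals)            # the random association step
            pieces.append(vals)
        for i in range(len(bucket)):
            out.append(tuple(v for piece in pieces for v in piece[i]))
    return out
```

Within a bucket, the quasi-identifier columns and the sensitive column each survive as exact value sets, but which identifier goes with which sensitive value is randomized.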

99 Ranking Model Adaptation for Domain-Specific Search

With the explosive emergence of vertical search domains, applying the broad-based ranking model directly to different domains is no longer desirable due to domain differences, while building a unique ranking model for each domain is both laborious for labeling data and time-consuming for training models. In this paper, we address these difficulties by proposing a regularization-based algorithm called ranking adaptation SVM (RA-SVM), through which we can adapt an existing ranking model to a new domain, so that the amount of labeled data and the training cost is reduced while the performance is still guaranteed. Our algorithm only requires the prediction from the existing ranking models, rather than their internal representations or the data from auxiliary domains. In addition, we assume that documents similar in the domain-specific feature space should have consistent rankings, and add some constraints to control the margin and slack variables of RA-SVM adaptively. Finally, a ranking adaptability measurement is proposed to quantitatively estimate whether an existing ranking model can be adapted to a new domain. Experiments performed over Letor and two large-scale datasets crawled from a commercial search engine demonstrate the applicability of the proposed ranking adaptation algorithms and the ranking adaptability measurement.

Datamining/Webmining

2012

100 Efficient Extended Boolean Retrieval

Extended Boolean retrieval (EBR) models were proposed nearly three decades ago, but have had little practical impact, despite their significant advantages compared to either ranked keyword or pure Boolean retrieval. In particular, EBR models produce meaningful rankings; their query model allows the representation of complex concepts in an and-or format; and they are scrutable, in that the score assigned to a document depends solely on the content of that document, unaffected by any collection statistics or other external factors. These characteristics make EBR models attractive in domains typified by medical and legal searching, where the emphasis is on iterative development of reproducible complex queries of dozens or even hundreds of terms. However, EBR is much more computationally expensive than the alternatives. We consider the implementation of the p-norm approach to EBR, and demonstrate that ideas used in the max-score and wand exact optimization techniques for ranked keyword retrieval can be adapted to allow selective bypass of documents via a low-cost screening process for this and similar retrieval models. We also propose term-independent bounds that are able to further reduce the number of score calculations for short, simple queries under the extended Boolean retrieval model. Together, these methods yield an overall saving from 50 to 80 percent of the evaluation cost on test queries drawn from biomedical search.

Datamining/Webmining

2012
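
The p-norm EBR scoring functions themselves are compact enough to show. Below is the standard unweighted form over term scores in [0, 1] (our simplification; the full p-norm model also carries per-operand weights):

```python
def p_or(scores, p=2.0):
    """p-norm OR node: rewards any high-scoring operand."""
    return (sum(s ** p for s in scores) / len(scores)) ** (1 / p)

def p_and(scores, p=2.0):
    """p-norm AND node: penalized by any low-scoring operand."""
    return 1 - (sum((1 - s) ** p for s in scores) / len(scores)) ** (1 / p)

def score_query(sa, sb, sc, p=2.0):
    """Document score for the example query (a OR b) AND c."""
    return p_and([p_or([sa, sb], p), sc], p)
```

As p grows the operators approach strict Boolean AND/OR, while p = 1 degrades both to plain averaging; intermediate p values are what make the rankings meaningful.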


102 Information Technology Skill Management Strategies for Implementing New Technologies

Managing human resources and skills for information technology (IT) presents a challenging task for executives, more so when new ITs are involved. Lack of familiarity with the technology, the learning curve associated with its incorporation, and the relative paucity of skilled personnel serve to alter the strategies for management of human resources and skills. This paper presents a system dynamics approach for examining alternative strategies for skill management and supporting decisions during the implementation of service-oriented architecture (SOA) in an organization. For projects involving SOA, the initial planning and implementation of the underlying architecture entail higher costs and slower delivery of initial SOA applications. The use of appropriate IT human resource management strategies for SOA projects is critical for successful SOA implementation. The complexity associated with the fluctuating demand for IT skills, coupled with the need for highly skilled senior architects and professional developers in SOA projects, as well as inevitable delays in skill acquisition, makes this a challenging task. This paper examines the impact of alternative staffing strategies under various environmental conditions and provides guidance for staffing decisions. Using a design science methodology, it employs system dynamics as a vehicle for allowing human resource managers to examine the impact of alternative staffing strategies under a variety of environmental conditions.

Datamining/Webmining

2012

103 SQL INJECTIONS – A hazard to web applications

With changing times, our dependence on web applications for the fulfilment of our daily needs (like online shopping, banking, share trading, ticket booking, payment of bills, etc.) has increased. Because of this, our confidential data is present in the databases of various applications on the Web. The security of this vast amount of data is a matter of major concern. In recent times, SQL Injection attacks have emerged as a major threat to database security. In this paper we define SQL Injection and illustrate how SQL Injection attacks are performed. In addition, we survey the various SQL Injection detection and prevention tools and well-known attack methods. Finally, we provide our solution to the problem and assess its performance.

Datamining/Webmining

2012
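
The attack and the standard prevention are worth seeing side by side. A minimal self-contained demonstration (table, rows, and the classic tautology payload are our illustrative choices):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users(id INTEGER, name TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "alice"), (2, "bob")])

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled text is concatenated into the SQL
    return cur.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver passes name purely as data,
    # so quotes in the input can never change the query's structure
    return cur.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"   # classic tautology injection
```

The unsafe variant turns the payload into `WHERE name = 'x' OR '1'='1'` and leaks every row; the parameterized variant matches no user at all.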

104 An Exploration of Improving Collaborative Recommender Systems via User-Item Subgroups

Collaborative filtering (CF) is one of the most successful recommendation approaches. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. However, we find that two users with similar tastes on one item subset may have totally different tastes on another. In other words, there exist many user-item subgroups, each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than via the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. Then we propose a unified framework to extend the traditional CF algorithms by utilizing the subgroup information to improve their top-N recommendation performance. Our approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real-world data sets have demonstrated the effectiveness of our proposed approach.

Datamining/Webmining

2012
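
The key observation — two users agreeing on one item subset while disagreeing on another — can be sketched with a toy rating matrix (hypothetical data; this illustrates the motivation, not the MCoC algorithm itself):

```python
import math

# Toy rating matrix (hypothetical data): items A-C are dramas, X-Z thrillers.
ratings = {
    "u1": {"A": 5, "B": 3, "C": 4, "X": 1, "Y": 2, "Z": 1},
    "u2": {"A": 5, "B": 2, "C": 4, "X": 5, "Y": 4, "Z": 5},
}

def pearson(u, v, items):
    """Pearson correlation between two users, restricted to an item subset."""
    ru = [ratings[u][i] for i in items]
    rv = [ratings[v][i] for i in items]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    cu = [x - mu for x in ru]
    cv = [x - mv for x in rv]
    num = sum(a * b for a, b in zip(cu, cv))
    den = math.sqrt(sum(a * a for a in cu)) * math.sqrt(sum(b * b for b in cv))
    return num / den

# Over all items the two users look barely related...
overall = pearson("u1", "u2", ["A", "B", "C", "X", "Y", "Z"])
# ...but on the drama subgroup they agree strongly, and on thrillers they
# are near-opposites -- so subgroup-level neighbours predict better.
dramas = pearson("u1", "u2", ["A", "B", "C"])
thrillers = pearson("u1", "u2", ["X", "Y", "Z"])
```

A global similarity score averages the agreement and the disagreement away; computing similarity per item subgroup preserves both signals.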

105 Hybrid Intrusion Detection Systems (HIDS) using Fuzzy Logic

With the rapid growth of interconnected computers, the crime rate has also increased, and mitigating those crimes has become an important problem. Across the globe, organizations, higher-learning institutions and governments depend completely on computer networks, which play a major role in their daily operations. Hence the need to protect these networked systems has also increased. Cyber crimes like server compromise, phishing and sabotage of private information have increased in the recent past. It need not be a massive intrusion; even a single intrusion can result in the loss of highly privileged and important data. Intrusion behaviour can be classified based on different attack types. Smart intruders will not use a single attack type; instead, they combine a few different attack types to deceive the detection system at the gateway. As a countermeasure, computational intelligence can be applied to intrusion detection systems to recognize the attacks, alert the administrator about their form and severity, and take any predetermined or adaptive measures to dissuade the intrusion.

Datamining/Webmining

2012
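
A minimal sketch of the fuzzy-logic idea, assuming hypothetical traffic features and thresholds (this is not the paper's actual rule base, only the general mechanism of fuzzy membership, AND as min, OR as max):

```python
def ramp(x, a, b):
    """Rising fuzzy membership: 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def alert_severity(failed_logins, packet_rate):
    """Combine two traffic features into an alert severity in [0, 1]."""
    high_logins = ramp(failed_logins, 5, 20)   # hypothetical thresholds
    high_rate = ramp(packet_rate, 500, 2000)
    # Rule 1: high failed logins AND high packet rate (min acts as fuzzy AND).
    r1 = min(high_logins, high_rate)
    # Rule 2: high failed logins alone raise a weaker alert.
    r2 = 0.6 * high_logins
    return max(r1, r2)                         # max acts as fuzzy OR
```

Because the memberships are graded rather than boolean, a blended attack that keeps each feature just below a crisp threshold still accumulates a nonzero severity.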

106 Organizing User Search Histories

Users are increasingly pursuing complex task-oriented goals on the Web, such as making travel arrangements, managing finances or planning purchases. To this end, they usually break down the tasks into a few co-dependent steps and issue multiple queries around these steps repeatedly over long periods of time. To better support users in their long-term information quests on the Web, search engines keep track of their queries and clicks while searching online. In this paper, we study the problem of organizing a user’s historical queries into groups in a dynamic and automated fashion. Automatically identifying query groups is helpful for a number of different search engine components and applications, such as query suggestions, result ranking, query alterations, sessionization, and collaborative search. In our approach, we go beyond approaches that rely on textual similarity or time thresholds, and we propose a more robust approach that leverages search query logs. We experimentally study the performance of different techniques, and showcase their potential, especially when combined together.

Datamining/Webmining

2012
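
A baseline sketch of query grouping using textual similarity alone — the kind of approach the paper goes beyond with query-log signals (the query history and threshold here are hypothetical):

```python
def jaccard(q1, q2):
    """Word-overlap similarity between two query strings."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b)

def group_queries(queries, threshold=0.2):
    """Greedily attach each query to the most similar existing group."""
    groups = []
    for q in queries:
        best, best_sim = None, threshold
        for g in groups:
            sim = max(jaccard(q, other) for other in g)
            if sim >= best_sim:
                best, best_sim = g, sim
        if best is None:
            groups.append([q])  # no group is similar enough: start a new one
        else:
            best.append(q)
    return groups

# Hypothetical search history interleaving a travel task and a coding task.
history = ["cheap flights paris", "paris hotels", "python list sort",
           "flights paris december", "sort dict python"]
groups = group_queries(history)
```

This separates the travel queries from the coding queries even though they are interleaved in time; it fails, however, when related queries share no words ("paris hotels" vs. "where to stay near the Louvre"), which is why log-based signals are needed.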

107 A Service Versioning Model For Personalized E-Learning System (SOA)

Service versioning is a hot topic that has generated a broad range of guidance from a variety of sources. Several Web services standardization efforts are underway, but none of them addresses the problem of service versioning. Service versioning with metadata in an e-learning system makes search, retrieval, import and evaluation easy. In e-learning systems, learning objects with metadata allow learners to use quality educational content fitting their characteristics, and teachers may use quality educational content to structure their courses. A service versioning model for an e-learning object system with metadata satisfies the requirements of the learning environment. In personalized e-learning systems, when many learners search for data, retrieval is delayed as the data volume increases continuously. This problem can be solved by designing and applying a proper service versioning model to personalized e-learning systems. Hence, it is proposed to design a service versioning model with metadata for a personalized e-learning system. The proposed model satisfies requirements such as accessibility, interoperability, adaptability, durability and reusability in the learning environment. Supporting multiple versions of a service isolates the more expensive business-behaviour calls to specific versions, reducing the impact on all other learners.

Datamining/Webmining

2012

108 Monitoring Service Systems from a Language-Action Perspective (SOA)

Business processes are increasingly distributed and open, making them prone to failure. Monitoring is, therefore, an important concern not only for the processes themselves but also for the services that comprise these processes. We present a framework for multilevel monitoring of these service systems. It formalizes interaction protocols, policies, and commitments that account for standard and extended effects following the language-action perspective, and allows specification of goals and monitors at varied abstraction levels. We demonstrate how the framework can be implemented and evaluate it with multiple scenarios that include specifying and monitoring open-service policy commitments.

Datamining/Webmining

2012

109 SOA an Approach for Information Retrieval Using Web Services

With the exponential explosion of content generated on the Web, recommendation techniques have become increasingly indispensable. Innumerable kinds of recommendations are made on the Web every day, including recommendations of movies, music, images and books, query suggestions, tag recommendations, etc. No matter what types of data sources are used for the recommendations, essentially these data sources can be modeled in the form of various types of graphs. In this paper, we aim at providing a general framework on mining Web graphs for recommendations.

Datamining/Webmining

2012

110 ODAM An Optimized Distributed Association Rule Mining Algorithm

With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to utilize automated tools to find the desired information resources, and to track and analyze their usage patterns. Association rule mining (ARM) is an active data mining research area. However, most ARM algorithms cater to a centralized environment. In contrast to previous ARM algorithms, ODAM is a distributed algorithm for geographically distributed data sets that reduces communication costs. Recently, as the need to mine patterns across distributed databases has grown, Distributed Association Rule Mining (D-ARM) algorithms have been developed. These algorithms, however, assume that the databases are either horizontally or vertically distributed. In the special case of databases populated from information extracted from textual data, existing D-ARM algorithms cannot discover rules based on higher-order associations between items in distributed textual documents that are neither vertically nor horizontally distributed, but rather a hybrid of the two.

Datamining/Webmining

2011
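
The core communication-saving idea — sites exchanging compact support counts rather than raw transactions — can be sketched as follows (the transactions are hypothetical; real D-ARM algorithms add candidate generation, pruning and multiple passes):

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction sets held at two geographically separate sites.
site1 = [{"milk", "bread"}, {"milk", "eggs"}, {"bread", "eggs"}, {"milk", "bread"}]
site2 = [{"milk", "bread"}, {"bread", "eggs"}, {"milk", "bread", "eggs"}]

def local_counts(transactions, size=2):
    """Each site counts itemset supports locally and ships only the counts."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    return counts

# The coordinator merges the compact count messages instead of shipping raw
# transactions -- the communication saving a distributed ARM algorithm aims for.
merged = local_counts(site1) + local_counts(site2)
total = len(site1) + len(site2)
frequent = {s: c for s, c in merged.items() if c / total >= 0.5}
```

Because support counts are additive across partitions, each site sends a message whose size depends on the number of candidate itemsets, not on the number of transactions it holds.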

111 Truth discovery in web search engines

The world-wide web has become the most important information source for most of us. Unfortunately, there is no guarantee for the correctness of information on the web. Moreover, different web sites often provide conflicting information on a subject, such as different specifications for the same product. In this paper we propose a new problem called Veracity, i.e., conformity to truth, which studies how to find true facts from a large amount of conflicting information on many subjects provided by various web sites. We design a general framework for the Veracity problem, and invent an algorithm called TruthFinder, which utilizes the relationships between web sites and their information: a web site is trustworthy if it provides many pieces of true information, and a piece of information is likely to be true if it is provided by many trustworthy web sites. Our experiments show that TruthFinder successfully finds true facts among conflicting information, and identifies trustworthy web sites better than the popular search engines.

Datamining/Webmining

2010
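
The mutual-reinforcement idea can be sketched as a simple fixed-point iteration (the claims below are hypothetical, and the actual TruthFinder algorithm uses probabilistic confidence formulas and fact-similarity adjustments rather than this plain sum-and-normalize):

```python
# Hypothetical conflicting claims from five web sites about one product spec.
claims = {
    "site1": "1024GB", "site2": "512GB", "site3": "512GB",
    "site4": "512GB",  "site5": "1024GB",
}

trust = {site: 0.8 for site in claims}  # start with uniform trustworthiness
for _ in range(10):                     # iterate toward a fixed point
    # A fact's confidence grows with the trust of the sites asserting it.
    conf = {}
    for site, fact in claims.items():
        conf[fact] = conf.get(fact, 0.0) + trust[site]
    top = max(conf.values())
    conf = {f: c / top for f, c in conf.items()}  # normalise into [0, 1]
    # A site's trust becomes the confidence of the facts it provides.
    trust = {site: conf[claims[site]] for site in claims}

best_fact = max(conf, key=conf.get)     # the value judged most likely true
```

After a few iterations the sites backing the majority claim gain trust, which in turn raises that claim's confidence — the circular definition resolved as a fixed point.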

Artificial Intelligence

112 Distributed management system

The Distributed Management System project helps a company's manager analyze product sales by collecting data from distributors, sales managers, representatives, etc. It is a web-based application that helps upload sales data from different sources. The system consists of five modules, each with separate functionality; the manager and the admin have access to all modules. The main objective of the project is to analyze the sales of the products through the details supplied by the distributors, sales managers and representatives. It is very useful for distributors and sales managers to know about the sales of the products done by them and by others in a particular area/zone.

Artificial Intelligence

NON-IEEE

113 Cyber credit card system

This CreditCard Banking (CCB) system allows the user to use his credit card to purchase products. The project validates the credit card number, security number, expiry date and product discounts. After validation of the credit card, the amount for the purchased products is deducted from the user's bank account. Credit cards have changed the pattern of purchasing: previously, a user who wanted to purchase something had to carry money everywhere. CCB gives all the information related to credit cards and their additional features, helping the customer decide wisely which credit card to select. The project has a powerful utility that lets users look up the details of a particular credit card in a simple and efficient manner, with the options limited to these categories of credit cards, thereby minimizing the time taken to find the details of the credit card of the user's choice.

Artificial Intelligence

NON-IEEE
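
Credit-card number validation is typically done with the Luhn checksum; a minimal sketch follows (the security-code and expiry checks mentioned above would be separate steps):

```python
def luhn_valid(card_number: str) -> bool:
    """Luhn checksum: double every second digit from the right; the digit
    sum must be a multiple of 10 for the number to be well-formed."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9   # same as summing the two digits of the product
        total += d
    return total % 10 == 0
```

The standard test number 4111 1111 1111 1111 passes the check, while altering any single digit makes it fail — the checksum catches typos, though it is not a security measure on its own.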


114 Advanced secured system 

"Advanced Secured System" deals with server-based information and maintenance of the server. The system is used to reduce the workload of the server. It provides mail and chat services between the clients and also responds to client requests. It is designed as a middleman between the client and the server, providing all kinds of services to the clients such as file transfer, mail and chat. The system is cost-effective and provides firewall-like security to the server. Though various software is available in the market, there is no popular server in terms of cost and services provided; being developed on the Java platform is an advantage for this software.

Artificial Intelligence

NON-IEEE

115 Secure Multisignature generation for group communication

In distributed systems it is sometimes necessary for users to share the power to use a cryptosystem. The system secret is divided up into shares and securely stored by the entities forming the distributed cryptosystem. The main advantage of a distributed cryptosystem is that the secret is never computed, reconstructed, or stored in a single location, making the secret more difficult to compromise. Investigations within the fields of threshold group-oriented signature schemes, threshold group signature schemes, Multisignature schemes, and Threshold-Multisignature schemes resulted in explicitly defining the properties of Threshold-Multisignature schemes.

Artificial Intelligence

NON-IEEE
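
The share-splitting idea — the secret never reconstructed or stored in one place — can be illustrated with Shamir's threshold secret sharing (a sketch of the general concept, not the specific Threshold-Multisignature construction described above):

```python
import random

PRIME = 2_147_483_647        # a Mersenne prime; all arithmetic is modulo this

def split_secret(secret, n_shares, threshold):
    """Shamir's scheme: the secret is f(0) of a random polynomial of
    degree threshold-1; each share is a point (x, f(x))."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recombines any `threshold` shares;
    fewer shares reveal nothing about the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(1234567, n_shares=5, threshold=3)
recovered = recover_secret(shares[:3])   # any 3 of the 5 shares suffice
```

Any threshold-sized subset of shares recovers the secret, so no single entity ever holds it — the property the abstract describes for distributed cryptosystems.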

116 Mobile payment service system

A Mobile Payment Service is designed for use on a handheld device such as a PDA or mobile phone. The service is optimized to display the shopping application most effectively on the small screens of portable devices, and has small file sizes to accommodate the low memory capacity and low bandwidth of wireless handheld devices. Java technology is used, bringing additional benefits such as an enhanced user experience, reduced airtime requirements and rich animated graphics.

Artificial Intelligence

NON-IEEE

117 Healthcare service system

A health care system is the organization of people, institutions, and resources to deliver health care services to meet the health needs of target populations. There is a wide variety of health care systems around the world, with as many histories and organizational structures as there are nations. In some countries, health care system planning is distributed among market participants. In others, there is a concerted effort among governments, trade unions, charities, religious, or other coordinating bodies to deliver planned health care services targeted to the populations they serve. However, health care planning has been described as often evolutionary rather than revolutionary. Health care information systems are becoming more and more computerized. A huge amount of health-related information needs to be stored and analyzed, and with the aid of computer systems this can be done faster and more efficiently. The dynamic health-care-oriented paradigm provides an alternative way of developing medical systems.

Artificial Intelligence

NON-IEEE