
[IEEE 2013 IEEE 22nd International Workshop On Enabling Technologies: Infrastructure For Collaborative Enterprises (WETICE) - Hammamet, Tunisia (2013.06.17-2013.06.20)] 2013 Workshops



A Privacy Manager for Collaborative Working Environments

David S. Allison1,2,3,4, Miriam A. M. Capretz1, Saïd Tazi2,4

1Western University, Department of Electrical and Computer Engineering, London, Ontario, Canada {dallison, mcapretz}@uwo.ca

2CNRS, LAAS, 7 avenue du colonel Roche, F-31400 Toulouse, France 3Univ de Toulouse, LAAS, F-31400 Toulouse, France

4Univ de Toulouse, UT1-Capitole, LAAS, F-31000 Toulouse, France {dallison, tazi}@laas.fr

Abstract—The ability to collaborate has long been key to the successful completion of tasks. With the availability of current networking and computing power, the creation of Collaborative Working Environments (CWEs) has allowed this process to occur over virtually any geographic distance. While the strength of a CWE is its ability to allow for the free exchange of information between collaborators, this openness exposes participants to a possible loss of privacy. In this paper, the issue of protecting privacy while collaborating is discussed. To address the privacy concerns that are raised, a generic privacy ontology is presented, along with a Collaborative Privacy Manager (CPM). The generic privacy ontology allows for the creation of privacy policies at several layers of granularity. The architecture of the CPM, consisting of several levels and modules, is introduced. The functions this CPM will play to ensure privacy in collaborative working environments are also detailed, and how each module within the CPM works towards accomplishing each goal is described.

Keywords-privacy; collaborative working environment; privacy ontology; privacy policy

I. INTRODUCTION

The ability to collaborate has always been vitally important to businesses and enterprises. Pooling together the resources and talents of a group of people is how complex problems and tasks are solved. Current networking and software technology now allow this collaborative process to extend far beyond the traditional common workplace. Collaborative Working Environments (CWEs) are software applications and platforms that facilitate diverse interactions between users, machines, and collaborative services [1]. CWEs allow for collaboration between individuals over vast geographical distances, and between individuals of differing enterprises. Within the CWE, groups and sub-groups can be formed in order to tackle specific tasks or to bring together individuals with a common specialization.

In order to be successful and productive, a CWE must meet many functional and non-functional requirements. Functional requirements define what the CWE will do, such as being able to share a specific file or support synchronous communication. Non-functional requirements define constraints and qualities upon the CWE. Compared to functional requirements, the problems surrounding non-functional requirements often require more thought and planning to solve, as these issues can be vague and difficult to quantify. This paper focuses on one such non-functional requirement of CWEs: providing adequate privacy protection for the collaborating members. One of the unique challenges of providing privacy protection in any environment is that the definition of privacy is not fixed; the concept of what is private varies over time, across regions, and between different people. What one person considers private information may not be considered private by another. In today's heavily online and networked world, concerns stretch beyond information that can be found, to include information that can be implied or discovered. Privacy is defined in our work as the ability to protect information about oneself that has not been released, as well as the ability to retain some level of control over information that has been released.

Collaborative working environments, as the name implies, allow for the collaboration of work between many individuals. The ability to transfer information between many different individuals and groups is the main strength of a CWE, as it allows for the completion of otherwise complicated and distributed work. However, this ability also carries with it many concerns related to privacy. As information is passed between collaborating members within a CWE, issues of privacy quickly become apparent. An individual may wish to share private information with one person or group, and not have that information shared with another person or group. Similarly, the information that is accessible by others must be controlled and monitored.

The first step in protecting an individual's privacy is to allow that individual to clearly state their privacy preferences. This can be done through the creation of a privacy policy that is able to describe privacy rules that range from very specific to very general [2]. This gives the policy owner fine-grained control over their own policy, while allowing an administrator to create policies to cover many individuals.

However, even with a proper privacy policy in place, major issues remain in the attempt to provide privacy. Any privacy system that is developed must be made as user friendly as possible, as systems that are deemed too complicated by a user will often be disregarded or disabled [3]. An informed user will be better able to protect their own information. However, even with the best attempts at education, many users will be left unqualified to make their own privacy decisions for every scenario [4]. The situation

2013 Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises

978-0-7695-5002-2/13 $26.00 © 2013 IEEE

DOI 10.1109/WETICE.2013.23



of being unqualified to make decisions about one's own privacy stems from the complexity of the problem. The definition of privacy is dynamic and subjective, and as a result not every user will interpret the same privacy principles in the same way. Privacy is also complex, as private information that seems safe to share at the time can be combined with other information, or be used at a later date, to exploit users in ways that are extremely difficult for a layperson to predict. These problems are amplified in a CWE, which by its very nature is dynamic as new users and groups enter, leave, and change the environment in real time. As such, the definition of a privacy policy must be featured alongside some autonomous element that can provide users with assistance. In this paper, such an element is described, known as a Collaborative Privacy Manager (CPM).

The rest of this paper is divided into sections. Section II describes related work in the field of collaborative privacy and privacy managers. Section III introduces a generic privacy ontology, which is used in the creation of privacy policies. Section IV introduces the architecture of the CPM, detailing the levels and modules that comprise it. Section V describes the functionality the CPM will be able to perform, and how its architecture lends itself to completing these tasks. Finally, Section VI contains conclusions and directions for future work.

II. RELATED WORK

Little work has been completed specifically covering privacy in Collaborative Working Environments. However, collaboration itself has been used as a means to provide privacy, and these efforts offer useful insight into how the fields of privacy and collaboration can be tied together. Indeed, many of the techniques used for providing privacy through collaboration will be used in creating a privacy solution for collaboration.

While the act of collaborating in a working environment creates its own share of privacy issues, collaboration itself can be used to help solve issues of privacy. A work by Anthonysamy, Rashid, Walkerdine, Greenwood, and Larkou [5] takes on the issue of privacy in online social networks by using collaboration to share privacy configurations among the users of the social network. This approach allows users to make fine-grained control decisions over what private information they are willing to share. The information that is selected is then saved into an access control configuration. These configurations can then be shared with, and rated by, other users in the social network. Finally, other users are able to select from previously rated configurations, choosing those they feel will adequately protect them. While not addressing collaborative environments directly, this idea of being able to select from privacy configurations that have been vetted in some manner (in this case, through a rating system) is an important one. It reduces the amount of work required from new users joining the network, and reduces errors for users with minimal privacy experience. This idea is demonstrated in this paper through the CPM, which can analyze previously created privacy policies and make recommendations to new collaborators who require their own policy.

A work by Kolter, Kernchen, and Pernul [6] also explores the idea of collaborative privacy management, in this case for users of the World Wide Web. This solution utilizes two main elements to provide privacy protection. The first element is a privacy community, which is tasked with providing feedback, experiences, and ratings about the privacy policies of Web service providers. This privacy community acts as the central element of the privacy architecture [6]. The second main element is described as a set of three local privacy components: privacy protection generator, privacy agent, and data disclosure log [6]. The privacy protection generator caters to inexperienced users by allowing for easy-to-create privacy policies and the selection of predefined Internet service types. The privacy agent component's function is to assist the user in making informed decisions about what private information the website being visited requires, and what information will be disclosed. The third component, the data disclosure log, records what information has been shared in past Web exchanges. In the best-case scenario, the data disclosure log would allow a user to access, change, or remove information they have previously shared [6]. This approach differs from our work in that it is concerned with private information disseminated over the Web, and deals with privacy policies described using P3P [7], whereas our approach uses a privacy policy and privacy ontology specific to CWEs. However, the ideas presented, such as making it easy for inexperienced users to create policies, allowing policies to be compared and ranked, and assisting users in making informed decisions, are all important ideas that will be required in the CPM.

In a work by Hong, Mingxuan, and Shen [8], the authors also extend P3P [7], with the goal of representing user privacy preferences for context-aware applications. A markup language is proposed that is suitable for both privacy policies and user preferences. In our work, we do not use P3P directly, but instead base privacy policies on our own privacy ontology. However, P3P and our ontology were created based on the same privacy guidelines [9].

III. GENERIC PRIVACY ONTOLOGY

In order to properly define the privacy preferences within a CWE, a generic privacy ontology has been developed, shown in Figure 1. In our generic privacy ontology, a privacy policy must be created and is identified as the PrivacyPolicy concept. Each PrivacyPolicy is made up of one or more rules, known as PrivacyPolicyRules. Each PrivacyPolicyRule contains the privacy elements: Collector, Retention, Purpose, and PrivateInformation. These elements are based on the Fair Information Practices (FIP) developed by the Organisation for Economic Co-operation and Development (OECD) [9]. The FIP of the OECD have been selected as the basis for the privacy elements because these guidelines have been used as the model for most of the privacy legislation throughout the world [10]. The Collector element contains a Node, as this element describes who or what is collecting the information. This generic privacy ontology allows for the creation of privacy policies as a series of layers. These layers allow for the creation of policies that can cover large groups of entities



Figure 1. Generic privacy ontology.

at the higher layers, to an individual entity at the lower layer. At the lowest layer, there are the Device and Node concepts. The Node concept is an individual entity within the system that contains personal information. The most common example of a Node is an individual user. The Device concept is an entity belonging to a Node that may contain its own private information, or private information of a Node that is required to be protected. At these layers, privacy policies can be assigned to either the Node or the Device. These policies each cover one specific entity, and allow for the creation of finely tuned privacy rules.

Nodes can be assigned one or more Roles. These Roles allow for the designation of assignments or tasks to a set of Nodes. At this layer, a privacy policy can be created per Role. These policies allow for the protection of information that is common among a group of similar Nodes.

The next layer is a Group. A Group allows for the collection of one or more Roles in order to accomplish a task. A Group may carry certain privacy protection requirements, and by creating a Group privacy policy, it can be ensured that everyone within the Group is meeting these requirements.

Finally, there are Organizations. An Organization is a collection of Groups that are governed by the same body. Such an Organization may wish that all of its members meet a set of privacy requirements. To this end, an Organization privacy policy can be used to apply the same privacy policy rules to all members of an Organization.
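The layered structure of the ontology can be illustrated in code. The following is a hedged Python sketch, not part of the paper's implementation: the field names and the boolean `allow` flag are our assumptions about how a PrivacyPolicyRule's elements might be represented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PrivacyPolicyRule:
    """One rule, carrying the four OECD-based privacy elements."""
    collector: str            # who or what collects the information (a Node)
    private_information: str  # the data element being governed
    purpose: str              # why the information is collected
    retention: str            # how long it may be kept
    allow: bool               # permit or deny access (our assumption)

@dataclass
class PrivacyPolicy:
    """A policy attached to one layer of the ontology."""
    owner: str  # e.g. an organization, group, role, node, or device identifier
    layer: str  # "organization", "group", "role", "node", or "device"
    rules: List[PrivacyPolicyRule] = field(default_factory=list)

# Example: an organization-layer policy forbidding email sharing.
org_policy = PrivacyPolicy(owner="ExampleOrg", layer="organization")
org_policy.rules.append(PrivacyPolicyRule(
    collector="ExampleOrg", private_information="email",
    purpose="external sharing", retention="none", allow=False))
```

In this sketch, a policy at any layer has the same shape; only the `layer` and `owner` fields indicate how broadly it applies.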

The reason for these different layers is to simplify the creation of privacy policies for individual entities. Creating privacy policies is difficult, and in dynamic environments such as CWEs, where the environment and the roles of an individual change rapidly, the amount of work required to keep each individual privacy policy adequate could quickly become overwhelming.

When a conflict arises between any two policies, policy conflict resolution will take place. Policy conflict resolution is a large subject, as there are numerous approaches that can be taken. While several types of conflicts can occur, the most common in our work, given our multiple privacy policy approach, is the modality conflict. Modality conflicts occur when multiple policies indicate opposing actions for the same subject, target, and action [11]. Due to the size of the topic, this paper will not include an in-depth discussion of policy conflict resolution; however, given the sensitivity of personally identifiable information (PII), a denial-takes-precedence strategy will be used to deal with conflicts. That is, when two policy rules are in conflict, access to that PII will be denied.
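The denial-takes-precedence strategy can be sketched in a few lines. This is an illustrative Python sketch under our assumptions: rules are represented as (subject, target, action, allow) tuples, and the default-deny behaviour when no rule applies is our choice, not stated in the paper.

```python
def resolve(rules):
    """Denial-takes-precedence over a set of rules that all match the
    same request. rules: iterable of (subject, target, action, allow)
    tuples; returns True only if at least one rule applies and none deny."""
    decisions = [allow for (_subject, _target, _action, allow) in rules]
    if not decisions:
        return False      # assumption: no applicable rule means deny by default
    return all(decisions)  # a single deny (False) overrides any number of allows
```

For example, a group policy allowing access to a member's email and that member's personal policy denying it resolve to a denial.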

IV. COLLABORATIVE PRIVACY MANAGER ARCHITECTURE

In this section, the Collaborative Privacy Manager (CPM) will be introduced through a description of its architecture. The architecture takes advantage of the generic privacy ontology shown in Figure 1. Within the environment, administrators will be selected to represent the different layers identified in the generic privacy ontology. For


[Figure 2 depicts the CPM architecture as four levels: a User Interface level; a Privacy Management level containing the Global Privacy Policies, Personal Privacy Policies, and Privacy Policy Planner modules and a Knowledge Base (Domain Privacy Ontology, Privacy Rules in SWRL); an Application Requirements level containing the Privacy Monitor; and an Enforcement Environment level containing the Privacy Analyzer, Privacy Planner, Privacy Executor, Anonymization module, and Generic Privacy Ontology.]

Figure 2. Collaborative privacy manager architecture.

example, there would be an administrator for the organization, and one for each group and role. These administrators are tasked with creating the privacy policies that govern their layer. The architecture of the CPM is divided into several distinct levels, as shown in Figure 2. Each level of the architecture is described in the following subsections.

A. User Interface Level

The User Interface Level, as shown in Figure 2, allows the collaborators within the CWE to interact with the CPM. Through the user interface, the collaborator will be given information, instruction, and alerts related to their privacy in the environment. The user interface will allow the collaborator to view any privacy policies that currently govern them. The collaborator will also use the user interface to outline their own privacy policy terms. As previously mentioned, the task of outlining one's own privacy policy is challenging due to the complexity of privacy problems. The CPM will be able to recommend privacy policy terms in order to ease this process for users with little or no experience. Privacy administrators who have been assigned to create a global privacy policy for an entire organization, group, or role will be able to enter their preferences through the user interface as well. These administrators will be provided with additional services in the user interface that are not offered to standard users.

B. Privacy Management Level

The Privacy Management Level contains the information and knowledge specific to the CWE.

The Global Privacy Policies module will contain all of the privacy policies that govern more than one individual collaborator. These policies will fall into three categories: organization, group, or role. Organizational policies will cover every collaborator working under the same organization. Group policies will cover groups or sub-groups of collaborators that are working on the same task or share the same goal. Role policies will cover collaborators who perform the same job within the environment, such as an engineer. These policies are created by privacy administrators selected to represent each organization, group, and role. These administrators will have more experience and training in creating privacy policies than the average user. Each global privacy policy can be created before any users enter the CWE.

The Personal Privacy Policies module will contain all of the privacy policies that govern individual collaborators. These policies let each collaborator add their own personal preferences to the protection of their private information, which allows for a more fine-grained level of protection than the global privacy policies can provide. More specific policies can create rules to deny access previously given by more general policies. For example, if a group allows access to all of its members' email addresses, a specific member may use their personal privacy policy to deny access to their own email address. However, more specific policies may not grant access to information that has been denied by a more general policy. The personal privacy policies are created by collaborators through the user interface as they first enter the CWE. These personal privacy policies can be altered at any time during the collaboration at the wish of their owner.
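The rule that a specific policy may narrow, but never widen, a more general one can be captured in a small decision function. This is a hedged sketch under our assumptions: each argument is the decision of the matching rule at its layer, with `None` standing for "no rule at this layer".

```python
def effective_access(general_allow, personal_allow):
    """Combine a general (e.g. group) decision with a personal one.
    Each argument is True (allow), False (deny), or None (no rule).
    A personal rule may deny what the general policy allows, but it
    may not grant what the general policy denies."""
    if general_allow is False:
        return False               # a general deny cannot be overridden
    if personal_allow is not None:
        return personal_allow      # personal rule refines a general allow
    return bool(general_allow)     # otherwise fall back to the general decision
```

With this ordering, the email-address example above works out as expected: a group-level allow combined with a personal deny yields a denial.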

The Privacy Policy Planner will be in charge of creating policy examples and generating helpful advice for each user. This information will be passed to the user interface, where the collaborator will provide input and feedback. In order for this module to create its suggestions, it must communicate with both the global and personal privacy policies. As collaborators enter a CWE, they will have already been assigned to the appropriate organizations, groups, and roles. These organizations, groups, and roles will each have their own global privacy policies. Access to the Global Privacy Policies module will allow the Privacy Policy Planner module to access what rules any new user is already obligated to follow. These preexisting rules will influence what suggestions the Privacy Policy Planner module will make to the new collaborator. For example, the Privacy Policy Planner module may wish to suggest that all users protect their email address from being shared. However, it could be the case that the organization these new collaborating users are assigned to has a rule in their organizational privacy policy stating their employees will not share their email addresses. In this case, the Privacy Policy Planner module will not suggest to the new users to add a 'protect email' rule to their personal privacy policies, as they are already covered by their organizational privacy policy.

Access to the personal privacy policies will assist the Privacy Policy Planner module in a different way. By searching through these policies, the Privacy Policy Planner module can detect trends that may have developed between the users in the CWE. These trends will assist the Privacy Policy Planner module in making more intelligent suggestions. For example, if the Privacy Policy Planner determines that a high



percentage of all engineers within a CWE have rules protecting their date of birth, it can suggest this rule to any new engineers that enter the CWE.
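The Planner's trend detection could be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the 80% threshold and the representation of peer policies as sets of rule names are our assumptions.

```python
from collections import Counter

def suggest_rules(peer_policies, threshold=0.8):
    """Suggest rules that a large share of peers in the same role use.
    peer_policies: list of sets of rule names, one set per peer.
    threshold: fraction of peers (assumed 0.8) that must share a rule."""
    if not peer_policies:
        return []
    counts = Counter(rule for policy in peer_policies for rule in policy)
    n = len(peer_policies)
    # Keep rules adopted by at least `threshold` of the peers, sorted for stability.
    return sorted(rule for rule, c in counts.items() if c / n >= threshold)
```

For the example above, if four of five engineers protect their date of birth, that rule would be suggested to a newly arriving engineer.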

The Privacy Management Level also includes a Knowledge Base, which contains two modules: the Domain Privacy Ontology and the Privacy Rules. The Domain Privacy Ontology is an extension of the previously described Generic Privacy Ontology, extended with privacy elements specific to the individual CWE being deployed. The Privacy Rules are a translation of both the global and personal privacy policies into a machine-readable format, such as the Semantic Web Rule Language (SWRL) [12].

C. Application Requirements Level

The Application Requirements Level consists of the modules that must be directly built into the application the collaborators use to access the CWE. In the case of the CPM, the Application Requirements Level contains a Privacy Monitor module. This monitoring module will have direct access to the information exchanges occurring as the user collaborates with others. It is able to monitor the incoming and outgoing requests that contain private information, and send this information to the Enforcement Environment Level, where autonomous decisions can be made.

D. Enforcement Environment Level

The Enforcement Environment Level contains the decision-making processes of the CPM, allowing it to make autonomous decisions without the direct intervention of a human user. Each of the modules within the Enforcement Environment Level will be created as services within the CWE. The Privacy Analyzer is the first module in this decision-making process. This module will parse the information that has been passed to it from the Privacy Monitor. The analyzer will distinguish what information pertains to the current user, and compare this to previously recorded information. An example of this could be comparing a failed information request to previous failed information requests.

The Privacy Planner module takes input from the Privacy Analyzer, and has the job of deciding what outcome should result for a given situation. Continuing the example of a failed information request, the Privacy Planner module could decide that since a high number of requests have failed, the user whose information is being requested should be notified. This would make the notified user aware that there is a demand for a certain piece of their information. The user would then be able to make it available if they so desired.

The Privacy Executor is the final step in the decision making process of the Enforcement Environment Level. This module will take the decision that has been made by the Privacy Planner and encode it into the correct format so the decision can be carried out by the appropriate software. Continuing the example from above, the Privacy Executor would send a message to the user interface, alerting the user to the high number of failed information requests.
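The Analyzer, Planner, and Executor chain for the failed-request example can be sketched as three small functions. This is a hedged illustration: the event shape, the per-item counter, and the threshold value are our assumptions, not interfaces defined by the paper.

```python
def analyze(event, history):
    """Analyzer: fold a monitored event into per-item failure counts.
    event: dict with at least 'type' and 'item' keys (our assumed shape)."""
    if event["type"] == "failed_request":
        item = event["item"]
        history[item] = history.get(item, 0) + 1
    return history

def plan(history, threshold=100):
    """Planner: decide which items have enough failed requests to
    warrant notifying the information's owner."""
    return [item for item, count in history.items() if count >= threshold]

def execute(decisions):
    """Executor: encode each decision as a user-interface message."""
    return [f"High demand for '{item}': consider reviewing your policy."
            for item in decisions]
```

A run of 100 failed requests for a user's email address would thus end with the Executor sending a single alert message to the user interface.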

A separate module, also in this level, is the Anonymization module. In some outcomes, information will be required to be anonymized, or masked in some fashion. This module was made separate from the Privacy Executor because this functionality is not required in every situation, and there are a number of different approaches by which it can be done, such as k-anonymity and l-diversity [13]. Each approach to anonymization has its own strengths and weaknesses, so allowing different approaches to be plugged in rather than built in provides additional flexibility to the framework. An example of this flexibility is the ability to select different anonymization approaches based on domain-specific requirements.
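The plug-in design can be sketched as a registry of interchangeable strategies. This is an illustrative Python sketch; the two strategies shown are simplistic stand-ins of our own invention, not implementations of k-anonymity or l-diversity.

```python
def mask_all(value):
    """Replace every character of a value with an asterisk."""
    return "*" * len(value)

def generalize_year(date_str):
    """Generalize an ISO date to its year, e.g. '1980-05-01' -> '1980'."""
    return date_str.split("-")[0]

# Registry of pluggable anonymization approaches; a deployment could
# register real k-anonymity or l-diversity implementations here instead.
ANONYMIZERS = {
    "mask": mask_all,
    "generalize_date": generalize_year,
}

def anonymize(value, approach):
    """Apply the selected anonymization approach to a value."""
    return ANONYMIZERS[approach](value)
```

Because the Privacy Executor only calls `anonymize`, swapping strategies requires no change to the rest of the decision-making chain.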

The final aspect of the Enforcement Environment Level is the Generic Privacy Ontology. This ontology is shown in Figure 1. This generic ontology is called upon to create specific domain privacy ontologies for each CWE.

V. COLLABORATIVE PRIVACY MANAGER FUNCTIONS

With the architecture of the CPM outlined, the functions it can perform will now be described. For each function, it is explained why the function is required and how it relates to the given architecture.

A. User Privacy Policy Creation Assistance

One of the most important functions the CPM will play is assisting a user in the creation of their privacy policy. An important factor in creating a successful privacy protection solution is that the solution itself must have a small overall impact on the users themselves. Users want their privacy to be protected, but they do not want to see the solution in action. Any major inconveniences introduced by a privacy solution, even if they help protect privacy, will ultimately be ignored or circumvented by a large number of users. As mentioned, users are often unqualified to make every privacy decision themselves when creating a privacy policy [4]. And even for users well versed in privacy policy creation, the process of outlining their privacy demands cannot be made tedious. In a dynamic situation like a CWE, this problem is exacerbated further by the environment's ability to change, through the addition and removal of users, and the creation, deletion, and modification of groups and relationships.

To solve this problem, the CPM will assist the user in the creation of their privacy policy upon entering the CWE. As shown in Figure 1, the generic privacy ontology contains privacy policies at the organization, group, role, node, and device layers. Before entering a CWE, new users will have already been assigned by the environment administrators to any organization, group, and/or role that is required to complete their task. Each organization, group, and role has its own privacy administrator, who will have created a policy to govern its members. When a collaborator enters a CWE, this entrance is detected by the CPM's Privacy Monitor. The CPM is made aware of any organizations, groups, and/or roles the new collaborator may be assigned to. The CPM will then take stock of what rules already govern this new collaborator, based on the relevant organizational, group, and role policies stored in the Global Privacy Policies module. The CPM's Privacy Policy Planner will then conduct an analysis and indicate recommended



privacy rules to the new collaborator through the user interface. This recommendation process is carried out by the CPM through a number of metrics. For example, the CPM can track what popular rules are currently being used by other users within the environment. If every user's policy contains one specific rule, then this rule is highly likely to concern the new user as well. Similarly, the CPM can make recommendations through the user interface to the new user based on other collaborating users who share the same organization, group, and/or role. Any rules selected from this recommendation by the new collaborator through the user interface will be added to that individual's node privacy policy, which is stored in the Personal Privacy Policies module.

As users enter and leave the CWE, as projects and groups end or are modified, and as other organizational partnerships begin and end, the situation in the CWE also changes. Rather than leaving it up to each collaborator to monitor the situation and make changes when needed, the CPM's Privacy Monitor will track these changes and provide users with information through the user interface, informing them of what rules they are free to adopt and what rules are no longer necessary.

B. Abnormal Conditions

Another important function the CPM will play is that of a monitor for abnormal conditions. The CPM can be set with any number of conditions, which can be decided upon by the environment administrator. The CPM's Privacy Monitor will observe for these conditions, and once any condition has been met, the CPM will inform the required parties and make recommendations. This ability will enhance the usefulness of the privacy protection by increasing the collaborative users' knowledge of what is occurring in the environment. This in turn will allow collaborative users to make informed decisions that will result in an increase of privacy protection, and/or an increase in their usefulness in the system.

An example of a monitoring condition is the total number of failed requests for the same piece of private information. A threshold could be created as a baseline for when a user should be informed of failed requests for their information, such as 100 failed requests per day. This threshold would be set by the environment's administrator and stored in the Privacy Monitor. This information is important for a user to be aware of. The requests could be malicious in nature or part of an attack, in which case the informed user and the CPM can alert the appropriate parties to the issue. There is also the possibility that the user, when designing their privacy policy, was unaware of what information others would find useful. The user may be willing to share the information that is in demand, making themselves more useful to those making the requests. In this case, the CPM can suggest through the user interface how the user's privacy policy can be adjusted to make this information available. If these changes are acceptable to the user, the CPM can make the appropriate changes to their policy stored in the Personal Privacy Policies module. Finally, there is a third possibility: the private information is being correctly blocked, and the requests are not malicious. In this case, the CPM would notify the affected user through the user interface. Upon being told by the user, also through the user interface, that no changes to their policy are required, the CPM could offer to change the conditional threshold for this user if they so desire.
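The failed-request condition described above can be sketched as a simple per-day counter inside the Privacy Monitor. The class and method names below (`PrivacyMonitor`, `record_failed_request`) and the alert mechanism are illustrative assumptions, not part of the CPM specification:

```python
from collections import defaultdict

class PrivacyMonitor:
    """Counts failed requests per (user, data item) and flags an
    abnormal condition when a threshold is reached. Illustrative
    sketch only; names and notification mechanism are assumptions."""

    def __init__(self, threshold=100):
        # Threshold set by the environment administrator, e.g.
        # 100 failed requests per day for the same private datum.
        self.threshold = threshold
        self.failed = defaultdict(int)
        self.alerts = []

    def record_failed_request(self, user, data_item):
        key = (user, data_item)
        self.failed[key] += 1
        if self.failed[key] == self.threshold:
            # Inform the affected user through the user interface and
            # suggest either reporting an attack or adjusting the policy.
            self.alerts.append(
                f"User {user}: {self.threshold} failed requests for "
                f"'{data_item}' today; review your policy or report."
            )

    def reset_daily_counts(self):
        # Called once per day so the threshold is interpreted per day.
        self.failed.clear()
```

A usage example: `PrivacyMonitor(threshold=100)` would append one alert the moment the hundredth failed request for the same item arrives, and stay silent afterwards until the daily reset.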

By monitoring for abnormal conditions, the CPM will reduce the workload on individuals. It will increase the privacy protection of the CWE by alerting users to potential attacks. It will also allow users to become more useful and knowledgeable within the environment, by alerting them to situations where demand for previously unshared information is high.

C. Sanitization of Private Information

Private information is defined as information that uniquely identifies an individual. It can be a single piece of data, such as an email address or employee number, or a collection of data that together uniquely identifies an individual, such as a name combined with a birth date. In a CWE, however, private information is not the only type of information that will be used, discussed, and shared. As CWEs allow for the creation of organizations, groups, and subgroups, there will often be information that pertains to one of these entities. This information is not private per se, and is often work or task related; as such, it would not normally be covered by a privacy policy or be of concern to the CPM. An example of this type of information could be the design model of an airplane wing worked on by an engineering group within an aerospace company. However, such information often contains private information; the model of the airplane wing could contain information on the engineer who designed it. When this information is shared with another organization or group, the private information it contains may need to be sanitized. This is a task that the CPM would be concerned with and would have to be able to accomplish. Sanitization can be done in several ways, such as the removal of the data or the anonymization of the data.

In the case where group or organizational data is being transferred, the CPM Privacy Monitor will be consulted to check for any private data. If any is detected, the CPM will ensure it is dealt with appropriately: the Anonymization module will be called upon to make the required changes to the private information. This module contains different approaches for masking information and can be tailored to meet the preferences of the environment administrator.
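The removal and anonymization approaches can be sketched as follows. The field names and the single masking placeholder are illustrative assumptions; the actual Anonymization module is described as supporting multiple configurable masking approaches:

```python
import copy

# Fields treated as private are illustrative; in the CPM they would be
# identified via the Privacy Monitor and the relevant privacy policies.
PRIVATE_FIELDS = {"designer_name", "designer_email", "employee_number"}

def sanitize(record, mode="remove"):
    """Return a copy of a group/organizational record with private
    fields either removed or anonymized (masked). Sketch only."""
    clean = copy.deepcopy(record)
    for field in PRIVATE_FIELDS & clean.keys():
        if mode == "remove":
            del clean[field]
        else:
            # "anonymize": replace the value with a masked placeholder.
            clean[field] = "ANONYMIZED"
    return clean
```

For example, sanitizing the wing-model record from the scenario above would strip or mask the designing engineer's details while leaving the engineering data untouched.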

D. Retrieving Private Information from a Device

In the CWE scenarios this paper is designed to handle, the collaborators can consist of a mixture of human users (generically known as nodes) and devices. These devices are autonomous; they collect information and perform tasks on their own without requiring direct human intervention. The owner of a device is tasked with the creation of the device's privacy policy. Similar to the assistance the CPM gives when creating a user's privacy policy, the CPM

will provide assistance and recommendations when creating a device privacy policy. This assistance is based on comparisons with similar device privacy policies, which are also stored in the Personal Privacy Policy module. If a device is shared by more than one user, each of them will have the ability to edit the device's privacy policy.

The information collected by these devices will largely not be private. However, as with organizational or group data, there is the possibility that device information will contain private information. In this case, the CPM will be required to check the device information against the device's privacy policy and make any necessary removals or anonymizations. Again, this process will be handled by the CPM's Anonymization module. The CPM will also be tasked with informing the owner, through the user interface, of any privacy-related events they wish to know about. For example, a device owner may wish to be informed whenever anyone accesses private information from their device, or to be notified if any repeated or suspicious access attempts are made. The owner specifies these notification preferences when creating the device privacy policy.
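A minimal sketch of this check, assuming a hypothetical per-field policy representation and a simple notification callback (neither is the CPM's actual interface):

```python
def release_device_data(record, policy, notify):
    """Filter device data through the device's privacy policy before
    release. `policy` maps field names to an action ("allow", "remove",
    or "anonymize"); `notify` delivers the owner notification described
    above. All names here are illustrative assumptions."""
    released = {}
    handled_private = []
    for field, value in record.items():
        action = policy.get(field, "allow")
        if action == "allow":
            released[field] = value
        elif action == "anonymize":
            released[field] = "ANONYMIZED"
        # action == "remove": drop the field from the released data.
        if action != "allow":
            handled_private.append(field)
    if handled_private:
        # Owner asked to be informed whenever private fields are touched.
        notify(f"Private fields handled: {', '.join(handled_private)}")
    return released
```

Under this sketch, a sensor reading passes through untouched while the owner's contact details are stripped or masked, and the owner is notified that private fields were involved in the request.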

VI. CONCLUSIONS AND FUTURE WORK

As collaborative working environments grow in popularity and ability, the need to provide privacy protection becomes paramount. Collaborators must be assured that they are not sacrificing their private information in order to take advantage of the abilities of a CWE. With a large number of people exchanging large amounts of information, and without properly defined and protected privacy, private information could quickly disseminate and become uncontrollable. In this paper, we provide two main contributions to help solve the problem of privacy in Collaborative Working Environments.

The first contribution of this paper is the introduction and definition of a generic privacy ontology. This ontology is domain independent, which allows the ontology to be extended to meet any specific domain requirements. We define a privacy policy that consists of one-to-many privacy rules. These rules consist of four privacy elements, which are based on successful privacy legislation, and allow for the clear outlining of how private information will be used.
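As a rough illustration of the ontology's shape, a policy with one-to-many rules might be modeled as below. The paper defines four privacy elements per rule but this section does not name them, so the element names used here (`what`, `purpose`, `recipient`, `retention`) are hypothetical placeholders:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PrivacyRule:
    # Four privacy elements per rule, per the generic privacy ontology.
    # These element names are illustrative, not the paper's actual names.
    what: str        # the private information the rule covers
    purpose: str     # why it may be used
    recipient: str   # who may receive it
    retention: str   # how long it may be kept

@dataclass
class PrivacyPolicy:
    owner: str
    rules: List[PrivacyRule] = field(default_factory=list)  # one-to-many
```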

The second contribution of this paper is the introduction of a Collaborative Privacy Manager as a solution for providing privacy protection in collaborative working environments. The architecture of the CPM was shown, demonstrating the levels and modules required to produce the desired results. Each module was introduced, with its functions, tasks, and relations to the other modules defined. Following the architecture, the main functions of the CPM were outlined. These functions were described to demonstrate the usefulness of the CPM and to provide examples of the CPM in action, detailing the roles played by the CPM's internal modules.

For future work, further research will be conducted into policy conflict resolution. While the "denial takes precedence" approach provides an adequate solution, other approaches such as domain nesting [11] will be examined to determine if they

provide better results. Other future work includes developing the CPM and implementing it in a working CWE. This work will be carried out in a real-world scenario as part of the IMAGINE project [14]. This project will provide a platform for experimentation in which the performance and scalability of the CPM will be demonstrated and recorded. The CPM will be developed alongside a collaborative management system and an authorization management system. Together, these systems will provide a comprehensive solution for secure collaborative work.
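The "denial takes precedence" strategy mentioned in the future-work discussion can be sketched as a combining function over per-rule decisions. This is a generic deny-overrides sketch with a default-deny fallback (the fallback is an assumption), not the CPM's implementation:

```python
def resolve(decisions):
    """Combine per-rule access decisions under 'denial takes
    precedence': a single 'deny' outweighs any number of 'permit'
    decisions; with no applicable rules, default to 'deny'."""
    if "deny" in decisions:
        return "deny"
    return "permit" if "permit" in decisions else "deny"
```

Alternatives such as domain nesting would instead rank decisions by how specific the domain of the deciding rule is, which is what the planned comparison against [11] would evaluate.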

REFERENCES

[1] M. Martínez-Carreras, A. Ruiz-Martínez, A. Gómez-Skarmeta, W. Prinz, "Designing a Generic Collaborative Working Environment," Proc. IEEE International Conference on Web Services 2007, IEEE, Jul. 9-13, 2007, pp. 1080-1087.

[2] D. Allison, M. Capretz, H. El Yamany, S. Wang, "Privacy Protection Framework with Defined Policies for Service-Oriented Architecture," Journal of Software Engineering and Applications, vol. 5, pp. 200-215, 2012.

[3] C. Hayes and J. Kesan, "Making modest moves: individual users and privacy in the cloud," Social Science Research Network, Apr. 1, 2012, [Online] Available: http://ssrn.com/abstract=2032653 [Accessed: Apr. 5, 2013].

[4] R. Dodge, C. Carver, and A. Ferguson, "Phishing for User Security Awareness," Computers & Security, vol. 26, iss. 1, 2007, pp. 73-80.

[5] P. Anthonysamy, A. Rashid, J. Walkerdine, P. Greenwood, and G. Larkou, "Collaborative privacy management for third-party applications in online social networks," Proc. 1st Workshop on Privacy and Security in Online Social Media, ACM, Apr. 17, 2012.

[6] J. Kolter, T. Kernchen, and G. Pernul, "Collaborative Privacy Management," Computers and Security, vol. 29, iss. 5, Jul. 2010, pp. 580-591.

[7] L. Cranor, M. Langheinrich, M. Marchiori, M. Presler-Marshall, and J. Reagle, "The Platform for Privacy Preferences 1.0 Specification," W3C, Apr. 16, 2002. [Online] Available: http://www.w3.org/TR/P3P. [Accessed: Apr. 5, 2013].

[8] D. Hong, Y. Mingxuan, and V. Shen, "Dynamic privacy management: a plug-in service for the middleware in pervasive computing," Proc. 7th International Conference on Human Computer Interaction with Mobile Devices & Services, ACM, 2005, pp. 1-8.

[9] Organisation for Economic Co-operation and Development, "OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data," 1980. [Online] Available: http://www.oecd.org/document/18/0,3343,en_2649_34255_1815186_1_1_1_1,00.html. [Accessed: Apr. 5, 2013].

[10] A. Cavoukian, T. Hamilton, "The Privacy Payoff: How Successful Businesses Build Customer Trust," McGraw-Hill Ryerson Limited, Whitby, Ontario, Canada, 2002.

[11] G. Russello, C. Dong, N. Dulay, "Authorisation and Conflict Resolution for Hierarchical Domains," Proc. Eighth IEEE International Workshop on Policies for Distributed Systems and Networks, IEEE, Jun. 13-15, 2007, pp. 201-210.

[12] I. Horrocks, P. Patel-Schneider, H. Boley, S. Tabet, B. Grosof, and M. Dean, "SWRL: A Semantic Web Rule Language Combining OWL and RuleML," W3C, May 21, 2004. [Online] Available: http://www.w3.org/Submission/SWRL. [Accessed: Apr. 5, 2013].

[13] J. Wang, Y. Luo, S. Jiang, and J. Le, "A Survey on Anonymity-Based Privacy Preserving," Proc. 2009 International Conference on E-Business and Information System Security, IEEE, May 23-24, 2009.

[14] IMAGINE Project, "IMAGINE: Innovative end-to-end Management of Dynamic Manufacturing Networks," 2012. [Online] Available: http://www.imagine-futurefactory.eu/index.dlg. [Accessed: Apr. 5, 2013].
