Copyright © WCST-2014 Published by Infonomics Society ISBN: 978-1-908320-43-8

Edited By

Charles A. Shoniregun

Galyna A. Akmayeva

World Congress on Sustainable Technologies (WCST-2014)

Technical Co-sponsored by IEEE UK/RI Computer Chapter

December 8-10, 2014, London, UK

www.wcst.org

WCST-2014 Proceedings

Contents Page Keynote Speakers Executive Committees

PhD / Doctorate Consortium Sessions

ICITST-2014, WorldCIS-2014 and WCST-2014

December 8-10, 2014, London, UK

Heathrow Windsor Marriott Hotel

Ditton Road, Langley

Berkshire

SL3 8PT

Tel: +44 (0)1753 598 181

Fax: +44 (0)1753 598 157

Message from the Steering Committee Chair

Welcome to the International Conference for Internet Technology and Secured Transactions (ICITST-2014), the World Congress on Internet Security (WorldCIS-2014) and the World Congress on Sustainable Technologies (WCST-2014). The ICITST-2014, WorldCIS-2014 and WCST-2014 are collocated conferences that provide an opportunity for academics and professionals to bridge the knowledge gap and to promote research esteem.

The ICITST-2014 received 1446 papers from 93 countries, of which 413 were accepted after the first review and 107 were finally accepted for presentation, alongside 5 workshops. The WorldCIS-2014 received 235 papers from 39 countries, of which 53 were accepted after the first review and 31 were finally accepted for presentation. The WCST-2014 received 275 papers from 32 countries, of which 53 were accepted after the first review and 20 were finally accepted for presentation. A double-blind evaluation method was adopted for each conference's submissions. Please note that selected papers will be invited for publication in high-impact international journals.

Many people have worked very hard to make this conference possible. I would like to thank all who have helped in making ICITST-2014, WorldCIS-2014 and WCST-2014 a success. The Steering Committee and the reviewers each deserve credit for their excellent work. I thank the authors who have contributed to each of the conferences and all our keynote speakers, Professor John Barrett, Professor Steven Furnell, Professor Frank Wang, Dr Tyson Brooks, Dr George Ghinea and Professor René Lozi, for agreeing to participate in ICITST-2014, WorldCIS-2014 and WCST-2014. I would also like to express my appreciation to the following organisations for their sponsorship and support: the IEEE UK/RI Computer Chapter, the Infonomics Society, Canadian Teacher Magazine and the National Association for Adults with Special Learning Needs (NAASLN). It has been a great pleasure to serve as the Steering Committee Chair for the three conferences. The long-term goal of ICITST, WorldCIS and WCST is to build a reputable and respected conference series for the international community.

On behalf of the ICITST-2014, WorldCIS-2014 and WCST-2014 Executive members, I would like to encourage you to contribute to the future of ICITST, WorldCIS and WCST as authors, speakers, panellists and volunteer conference organisers. I wish you a pleasant stay in London, and please feel free to exchange ideas with other colleagues.

Professor Charles A. Shoniregun

ICITST-2014, WorldCIS-2014 and WCST-2014

Steering Committee Chair

Contents Page

Welcome Message 3

Contents Page 4

Executive Committee 6

Technical Programme Committees 6

Keynote Speakers 8

Keynote Speaker 1: Professor John Barrett 9

Keynote Speaker 2: Professor Steven Furnell 11

Keynote Speaker 3: Professor Frank Wang 12

Keynote Speaker 4: Dr Tyson Brooks 13

Keynote Speaker 5: Dr George Ghinea 14

Keynote Speaker 6: Professor René Lozi 15

PhD / Doctorate Consortium Organiser: Professor Charles A. Shoniregun 17

Sessions 18

Session 1: Sustainability and Waste Management 19

Design Implications for a Community-based Social Recipe System (Authors: Veranika Lim, Fulya Yalvac, Mathias Funk, Jun Hu, Matthias Rauterberg, Carlo Regazzoni, Lucio Marcenaro) 20

Rainfall-Runoff relationship for streamflow discharge forecasting by ANN modeling (Authors: Sirilak Areerachakul, Prem Junsawang) 28

Performance of Granular Activated Carbon comparing with Activated Carbon (bagasse) Biofiltration in Wastewater treatment (Author: Nathaporn Areerachakul) 32

Session 2: Sustainable Energy Technologies, Carbon and Emission 36

The integrated permitting system and environmental management: a cross analysis of the landfill sector in Mediterranean regions (Authors: Maria Rosa De Giacomo, Tiberio Daddi) 37

Carbon Dioxide Mitigation Strategies in Power Generation Sector: Singapore (Authors: Hassan Ali, Steven Weller) 43

Studies of isothermal swirling flows with different RANS models in unconfined burner (Authors: A.R. Norwazan, M.N. Mohd Jaafar) 49

Challenging Instruments and Capacities to Engage in Sustainable Development (Author: Carlos Germano Ferreira Costa) 55

Session 3: Operation, Optimization and Servicing 58

Password Security Enhancement by Characteristics of Flick Input with Double Stage C.V. Filtering (Authors: Nozomi Takeuchi, Ryuya Uda) 59

Performance Evaluation of Cloud E-Marketplaces using Non Preemptive Queuing Model (Authors: A.O. Akingbesote, M.O. Adigun, S.S. Xulu, E. Jembere) 67

Comparative Analysis of Sparse Signal Recovery Algorithms based on Minimization Norms (Authors: Hassaan Haider, Jawad Ali Shah, Usman Ali) 73

WCST-2014 Executive Committees

General Chair

Frank Zhigang Wang, University of Kent, UK

Steering Committee Chair

Charles A. Shoniregun, Infonomics Society, UK and Ireland

Steering Committees

Emmanuel Hooper, Harvard University, USA Frank Zhigang Wang, University of Kent, UK Ion Tutanescu, University of Pitesti, Romania

Liang-Jie (LJ) Zhang, Kingdee International Software Group, China Paul Hofmann, Saffron Technology, USA

Nick Savage, University of Portsmouth, UK

Publication Chair

Galyna Akmayeva, Dublin Institute of Technology, Ireland

PhD Student Forum Chair

Robert M. Foster, University of Wolverhampton, UK

Technical Programme Committee Chair

Roberto Pereira, University of Campinas (UNICAMP), Brazil

Technical Programme Committees

Israel Koren, University of Massachusetts, USA Hicham Adjali, Kingston University, UK

Javier Alonso, Technical University of Catalonia, Spain Mani Krishna, University of Massachusetts, USA

Narimantas Zdankus, Kaunas University of Technology, Lithuania Roderick Lawrence, University of Geneva, Switzerland

Andrew Geens, University of Glamorgan, UK Alan Brent, Stellenbosch University, South Africa

Adel Gastli, Sultan Qaboos University, Oman Hakan Aydin, George Mason University, USA

Motamed Ektesabi, Swinburne University of Technology, Australia Jean-Michel Lavoie, Université de Sherbrooke, Canada

Amip Shah, Hewlett-Packard Company, USA Jamal Zemerly, KUSTAR, UAE

Safwan El Assad, Polytech'Nantes, France Chan Yeob Yeun, KUSTAR, UAE

Princely Ifinedo, Cape Breton University, Canada Charles K. Ayo, Covenant University, Nigeria

Zhixiong Chen, Mercy College, USA Youakim Badr, INSA de Lyon, France

Richard Chbeir, Université de Bourgogne, France Sead Muftic, KTH - Royal Institute of Technology, Sweden

Vyacheslav Grebenyuk, Kharkiv National University of Radioelectronics (KNURE), Ukraine Victoria Repka, The People's Access Education Initiative, Australia

Daniel Oberle, SAP Research CEC, Germany Daniel Mosse, University of Pittsburgh, USA

Vania Paula de Almeida Neris, Federal University of Sao Carlos – UFSCar, Brazil

Keynote Speakers

WCST-2014: Keynote Speaker 1

John Barrett is a Professor in Ecological Economics at the Sustainability Research Institute

(SRI), University of Leeds. John's research interests include sustainable consumption and

production (SCP) modelling, carbon accounting and exploring the transition to a low carbon

pathway. John has an extensive knowledge of the use of Multi-Regional Environmental Input-

Output modelling to understand the effectiveness of strategies and policies to deliver a low

carbon economy. These key areas of research have involved the building of global trade models

to understand the embodied carbon emissions in goods and services and estimating the

upstream carbon emissions from emerging energy technologies. John leads one of the research

themes for UK Energy Research Centre on “Energy, Economy and Society” that examines

interactions between the UK energy system and the UK economy, and examines the potential

implications for policies, markets, prices and affordability. As well as his role in UKERC, John

is also the co-director of the UK INDEMAND Centre. The centre, funded jointly by the UK’s

Energy Programme, Government departments and industry partners, considers how changing

our use of materials and products can deliver substantial energy demand reduction in the UK.

John has advised or supported a number of government departments including DECC, Defra

and the Committee on Climate Change. John provides the UK Government with one of their

headline indicators, the Consumption-based GHG emissions of the UK. John was one of the

lead advisors to Defra in relation to the development of PAS2050. John was a lead author for the

IPCC 5th Assessment for Working Group III. John has appeared regularly on Radio 4 news

and discussion programmes and written numerous academic papers and policy reports on

economy / energy / environment issues. John is also a member of Climate Strategies, a not-

for-profit organisation that provides world-class, independent policy and economic research

input to European and international climate policy.

Title: The role of technology and energy demand reduction in climate policy

Abstract: The recent publication by the Intergovernmental Panel on Climate Change (IPCC) demonstrates the need for rapid and deep cuts in greenhouse gas emissions to have a reasonable probability of limiting global temperature increases to two degrees. The report is clear: we need a rapid roll-out of low carbon technologies and significant reductions in energy demand. However, many government strategies for addressing climate change rely predominantly on technological solutions and have ignored the full potential of energy efficiency measures to achieve an absolute reduction in energy use.

The presentation will consider the need to balance rapid technological progress in low carbon technologies with

changing consumption patterns. The presentation will cover the role of trade, resource efficiency measures and the

economy in climate change.

ICITST-2014 and WorldCIS-2014: Keynote Speaker 2

Professor Steven Furnell is the head of the Centre for Security, Communications and Network

Research at Plymouth University in the United Kingdom, and an Adjunct Professor with Edith

Cowan University in Western Australia. His interests include security management and culture,

computer crime, user authentication, and security usability. Prof. Furnell is active within three

working groups of the International Federation for Information Processing (IFIP) - namely

Information Security Management, Information Security Education, and Human Aspects of

Information Security and Assurance. He is the author of over 240 papers in refereed international

journals and conference proceedings, as well as books including Cybercrime: Vandalizing the

Information Society (2001) and Computer Insecurity: Risking the System (2005). He is also the

editor-in-chief of Information Management and Computer Security, and the co-chair of the

Human Aspects of Information Security and Assurance (HAISA) symposium. Steve is active in a

variety of professional bodies, and is a Fellow of the BCS, a Senior Member of the IEEE, and a

full member of the Institute of Information Security Professionals.

Title: Mobile Security: The Challenge of Liberation

Abstract: With mobile devices now an integral part of both our personal and business lives, we are routinely

carrying around valuable assets that need protection. This applies to both the devices and, more significantly, the

data they commonly hold and the further access they can facilitate. This heightens the need for long-standing

safeguards such as authentication, alongside the need for controls such as malware protection that have not

traditionally been required outside the realm of desktop and laptop systems. In addition, for organisations, the

mobile dimension also brings significant technical and policy considerations in terms of whether staff bring, choose,

or are assigned a device. With these points in mind, Steven Furnell examines the mobile security landscape,

considering the safeguards and ongoing challenges that need to be recognised.

ICITST-2014, WorldCIS-2014 and WCST-2014: Keynote Speaker 3

Frank Z. Wang is Professor in Future Computing and Head of the School of Computing,

University of Kent, UK. The School of Computing was formally opened by Her Majesty the

Queen. Professor Wang's research interests include cloud/grid computing, green computing,

brain computing and future computing. He has been invited to deliver keynote speeches and

invited talks to report his research worldwide, for example at Princeton University, Carnegie

Mellon University, CERN, Hong Kong University of Sci. & Tech., Tsinghua University

(Taiwan), Jawaharlal Nehru University, Aristotle University, and University of Johannesburg.

In 2004, he was appointed as Chair & Professor, Director of Centre for Grid Computing at

CCHPCF (Cambridge-Cranfield High Performance Computing Facility). CCHPCF is a

collaborative research facility in the Universities of Cambridge and Cranfield (with an

investment size of £40 million). Prof Wang and his team have won an ACM/IEEE Super

Computing finalist award. Prof Wang is Chairman (UK & Republic of Ireland Chapter) of the

IEEE Computer Society and Fellow of British Computer Society. He has served the Irish

Government High End Computing Panel for Science Foundation Ireland (SFI) and the UK

Government EPSRC e-Science Panel.

Title: How will computers evolve over the next 10 years?

Abstract: Computer science has an impact on many parts of our lives. Computer scientists craft the technologies that

enable the digital devices we use every day and computing will be at the heart of future revolutions in business,

science, and society. Our research targets the next generation computing paradigms and their applications. We

have been working on Cloud Computing, Grid Computing & Internet II for many years. A developed Cloud/Grid

Computing platform conforms to the Internet standard and can universally accelerate Big Data / Web / Media

applications by a factor of up to ten. This work won an ACM/IEEE Super Computing finalist award. We will also report

our research on Big Data, Green Computing, Brain Computing and Future Computing.

ICITST-2014 and WorldCIS-2014: Keynote Speaker 4

Dr Tyson Brooks works for the US Department of Defense (DoD) and is a Co-Director for the

Wireless Grid Testbed (WiGiT) at Syracuse University. He has more than 20 years of

professional experience in the design, development and production of complex information

systems/architectures, as well as leading the effort to develop secure information systems for the

US DoD and private sector organizations. His research interests are in the fields of cyber-security,

information assurance, information security architecture and internet of things

architectures. Dr. Brooks received a doctorate in Information Management from Syracuse

University and holds a master's degree in Information and Telecommunications Systems from

Johns Hopkins University, a master's degree in Business Administration from Thomas More

College and a bachelor's degree in Business Administration/Management from Kentucky State

University.

Title: The Internet of Things: Connected Reality

Abstract: The Internet of Things (IoT) will create a huge network of billions of smart 'Things/Objects' communicating with each other. In the IoT, smart things/objects are expected to become active participants in business, information and social processes. The IoT will be able to interact and communicate by exchanging data and information sensed about the environment and by reacting autonomously to real/physical events, with or without direct human intervention. The purpose of the IoT is to provide an information technology (IT) infrastructure facilitating the exchange of things in a secure and reliable manner. This keynote will discuss the anatomy of the IoT environment and its use within the home, business and our everyday lives.

ICITST-2014 and WorldCIS-2014: Keynote Speaker 5

Dr George Ghinea is a Reader in Computer Science at the School of Information Systems,

Computing and Mathematics at Brunel University in the United Kingdom. He carries out his

research within the People and Interactivity Research Centre (PANDI). His research activities

lie at the confluence of Computer Science, Media and Psychology. In particular, his work

focuses on the area of perceptual multimedia quality and the end-to-end communication

systems incorporating user perceptual requirements. His area of expertise involves eye-tracking,

telemedicine, multi-modal interaction, ubiquitous and mobile computing. He has over

150 publications and co-edited the book on Digital Multimedia Perception and Design. He

consults regularly for both public and private institutions within his research area.

Title: Advances in MulSeMedia = Multiple Sensorial Media

Abstract: Traditionally, multimedia applications have primarily engaged two of the human senses, the audio and the visual, out of the five possible. With recent advances in computational technology, it is now possible to talk of applications that engage the other three senses as well: the tactile, the olfactory and the gustatory. This integration leads to a paradigm shift away from the old multimedia towards the new mulsemedia: multiple sensorial media. In this talk, the speaker will focus on the issue of the perceptual experience of multimedia and how research in the area has opened new and sometimes challenging opportunities for mulsemedia applications.

ICITST-2014 and WorldCIS-2014: Keynote Speaker 6

Professor René Lozi is full Professor of Exceptional Class at Laboratory J.A. Dieudonné, University of Nice, France. In 1977 he entered the domain of dynamical systems, in which he discovered a particular mapping of the plane producing a very simple strange attractor (now known as the "Lozi map"). Nowadays his research areas include dynamical systems, complexity and emergence theories, as well as chaos-based cryptography and the control of chaos. He has been a Visiting Professor for several short periods at the Universities of Kyoto and Tokushima in Japan, the University of Berkeley, USA, Hangzhou Dianzi University, China, and Hong Kong City University. He is a member of the Editorial Board of the Indian Journal of Industrial and Applied Mathematics and the Journal of Nonlinear Systems and Applications, and a member of the Honorary Editorial Board of the International Journal of Bifurcation and Chaos. He is also involved in memristor theory. He works in all of those fields with renowned researchers. He received his Ph.D. from the University of Nice in 1975 and the French State Thesis (Habilitation) under the supervision of Prof. René Thom in 1983. He has served as Director of the IUFM (University Institute for Teacher Training) (2001-2006), as Vice-Chairman of the French Board of Directors of the IUFM (2004-2006), and in several other executive positions.
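The Lozi map mentioned in the biography has a simple closed form, so it can be stated in full. The sketch below is illustrative only: the map itself is standard, but the parameter values a = 1.7, b = 0.5 (the classic choice for which a strange attractor is observed) and the helper function names are our own, not taken from this document.

```python
def lozi_step(x, y, a=1.7, b=0.5):
    """One iteration of the Lozi map: x' = 1 - a*|x| + y, y' = b*x.

    The piecewise-linear |x| term replaces the quadratic term of the
    Henon map, yet still produces a strange attractor for suitable
    parameters.
    """
    return 1.0 - a * abs(x) + y, b * x


def lozi_orbit(x0, y0, n, a=1.7, b=0.5):
    """Return the first n + 1 points of the orbit starting at (x0, y0)."""
    points = [(x0, y0)]
    for _ in range(n):
        x0, y0 = lozi_step(x0, y0, a, b)
        points.append((x0, y0))
    return points
```

Plotting a few thousand points of `lozi_orbit(0.1, 0.1, 5000)` reveals the angular, piecewise-linear attractor the biography refers to.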

Title: The promising future of chaos theory for Personal Cryptographic Security

Abstract: The first example of the use of chaos for cryptographic purposes goes back to the early 1990s, when Pecora and Carroll found how to synchronize chaotic systems. A first experimental secure communication system via chaotic synchronization was reported two years later, built using Chua's circuit, and was soon improved by reducing the noise of the transmitted signal. Since these pioneering works, the possibility of self-synchronization of chaotic oscillations has sparked an avalanche of research on the application of chaos in cryptography.

Nowadays, twenty-five years after the beginning of chaotic cryptography, this research field continues to be active, as shown by the large number of papers being published, and it is thriving in the form of new and interesting proposals in all areas of modern cryptology. Some patents have also been taken out. However, in spite of the momentum given by this research, chaos-based cryptography has not yet gained an advantage over traditional techniques such as AES or RSA, because most authors are still using chaotic mappings discovered a long time ago. Nonetheless, several improvements have recently been made in chaos theory, making it possible to master completely the use of chaos in various industrial projects.

Henceforth, it seems that the tipping point where chaos-based cryptography surpasses traditional techniques will be reached in the not too distant future. This is why we think that the use of chaos theory for Personal Cryptographic Security (PCS) has a promising future. In this seminar we discuss the fascinating perspectives of this research field.

ICITST-2014, WorldCIS-2014 and WCST-2014: PhD/Doctorate Consortium

Charles A. Shoniregun is a Professor of Applied Internet Security and Information Systems and Founder of the Infonomics Society. He has been an invited speaker to NATO and a guest speaker at many universities in the UK and abroad on issues relating to his research and consultancy areas, and has several times won the IEEE Certificate of Appreciation. In 2008, he was an invited speaker at the Joint C2 Capabilities Conference organised for senior military and US government personnel in Washington DC. His research interests are in the fields of Internet security, cyber terrorism, risk assessment of technology-enabled information, electronic and mobile commerce (emC), second-life applications, third-stream activities, telecommunications and applied information systems. He is a committee member of the Harvard Research Consortium and Global Seminars (Harvard University), Editor-in-Chief of eight international journals, an author and co-author, Adjunct and Distinguished Professor in Applied Internet Security and Information Systems, an external assessor to many universities, and a consultant to the private and public sectors.

Title: Writing a Sustainable Research Paper

Abstract: The idea of writing a sustainable research paper, or developing a topic of research interest that can lead to a PhD / Doctorate degree or proposal, always involves endless thinking about where, when, why, what and who. Becoming an experienced researcher and writer in any field or discipline therefore takes a great deal of practice. This keynote lecture will highlight possible solutions in response to the lack of competence demonstrated by young researchers and PhD / Doctorate students, and to the understanding of what contributes to the knowledge gap.

PhD and Doctorate Consortium

The idea of writing a research paper, or developing a topic of research interest that can lead to a PhD / Doctorate degree or proposal, always involves endless thinking about where, when, why, what and who. Becoming an experienced researcher and writer in any field or discipline therefore takes a great deal of practice. The Consortium has the following objectives:

Provide a supportive setting for feedback on current research that will stimulate exchange of ideas;

Guide on future research directions;

Promote the development of a supportive community of scholars and a spirit of collaborative research;

Contribute to the conference goals through interaction with other researchers and conference events.

The PhD and Doctorate Consortium highlights possible solutions in response to the lack of competence demonstrated by young researchers and PhD and Doctorate students, and to the understanding of what contributes to the knowledge gap.

Sessions

Session 1: Sustainability and Waste Management

Design Implications for a Community-based Social Recipe System (Authors: Veranika Lim, Fulya Yalvac, Mathias Funk, Jun Hu, Matthias Rauterberg, Carlo Regazzoni, Lucio Marcenaro)

Rainfall-Runoff relationship for streamflow discharge forecasting by ANN modeling (Authors: Sirilak Areerachakul, Prem Junsawang)

Performance of Granular Activated Carbon comparing with Activated Carbon (bagasse) Biofiltration in Wastewater treatment (Author: Nathaporn Areerachakul)

Design Implications for a Community-based Social Recipe System

Veranika Lim, Fulya Yalvac, Mathias Funk, Jun Hu and Matthias Rauterberg
Designed Intelligence Group
Department of Industrial Design
Eindhoven University of Technology
The Netherlands

Carlo Regazzoni and Lucio Marcenaro
Department of Electrical, Electronic, Telecommunications Engineering and Naval Architecture
University of Genoa
Italy

Abstract—We introduced the concept of a community-based social recipe system which suggests recipes to groups of users based on available ingredients from these users (who can be from the same household or different households). In this paper we discuss the relevance and desirability of such a system, and how it should be designed, based on user studies. We identified the relevance of targeting ingredients and found positive expected experiences with the system, such as preventing habitual waste-related behavior, awareness of in-home food availability, creativity in cooking, moments for surprise and spontaneity, coordination among a group of friends, education and connectedness. Possible reasons for not using the system are trust and the inconvenience of distance among the users in a group that are suggested a social recipe. From our findings, we specify design implications for the system and optimization functions aiming at the prevention of food waste at a collective level.

I. INTRODUCTION

Food waste is a complex global issue with impacts on the environment and food security. In developed countries, roughly half of the total avoidable losses within the food chain are generated by consumers [1], which has resulted in prospects for redirecting consumer consumption patterns towards sustainable practices to reduce environmental impacts [2]. Preventing or reducing food waste generated by consumers, however, is considered a major challenge, as many factors are involved. These factors are, for example, knowledge [3][4], and skills and planning with regard to preparation and cooking practices [5][6]. Other factors, such as memory, attitude [5][6] and general beliefs, together with education and political affiliation, were also found to be stable predictors of overall environmental concern [7]. Busy lifestyles, social relations and the unpredictability of events are other important factors [5][6]. Moreover, our everyday behaviors around food have become less conscious, and decisions resulting in food waste are often implicit, indirectly linked or hidden [5][6]. Therefore, it is important to raise awareness of food waste patterns and to design intelligent solutions, embedded and accepted in our daily lives, that motivate people to reduce and avoid wasteful behaviors.

In the field of Human-Computer Interaction (HCI), recent research suggests exploring the roles of collectivism and community in food sharing practices as a way to reduce food waste [8][9]. Related to these findings, we presented Euphoria, a project working towards the design of a community-based social recipe system [10]. In this concept, ingredients available from different households are combined into one or more recipes, which are suggested to a group of users with the main aim of collective food waste prevention through collaboration and food sharing. Apart from its altruistic aim, this approach incentivizes people to share, cook and enjoy food together. In this study we explore the relevance and desirability of the concept from the user perspective, contributing to the design of the system and its food waste prevention potential. Including user studies early in the design process is expected to result in more relevant specifications of the behavior of the system and, hence, to increase the likelihood of acceptance in daily life. The objective of this study is threefold: first, to identify amounts and types of food waste as well as the reasons for wastage, which provide a basis for the proposed system; second, to explore users' expected experiences of a community-based social recipe system; and finally, to relate the findings to design implications for the behavior of the system in optimizing food waste by means of recipe suggestions.

II. RELATED WORK

In HCI, persuasive sustainability research is increasing in popularity. It has, however, mainly focused on issues such as energy consumption, water consumption or green transportation, with the aim to increase awareness [11]. Eco-feedback is an example of a strategy to increase awareness of resource use and encourage conservation by automatically sensing people's activities and feeding related information back through computerized means [12]. It aims at fostering positive attitudes towards sustainable practices aimed at conservation [13]. Some examples of eco-feedback displays are described in [14] and [15]. Eco-feedback, however, does not necessarily direct behavior change explicitly. With our system, we are interested in the possibilities beyond attitude change (i.e., behavior change). Maitland et al. [16] suggest that for persuasive technology to be successful, it should be designed to encourage action. Systems designed for action were argued to have impacts on creativity, pleasure and nostalgia, gifting, connectedness and trend-seeking behaviors [16]. Encouraging collective action is a major characteristic of Euphoria [10].

Next to designing for action, social influence strategies have also been found to have high potential as a means of generating positive behavior change [17]. Social mechanisms that humans use to influence others, such as social approval, peer pressure, norm activation or social comparison, are principles that can be applied successfully to support behavior change [18]. The use of social influence strategies is another major characteristic of Euphoria [10]. Studies have shown that our social environments are important determinants of food waste related behavior [5][6], stemming from cultural practices and the signaling of social status, but also emerging in an age of abundant choice and quantity of food. The social context therefore plays a major role in shaping our individual decision-making processes, specifically in the area of food waste. This highlights the importance of addressing the collective as a target for behavior change, as suggested in [9]. In fact, in previous findings, social activity was found to be a determinant of food waste [6]. With a community-based social recipe system, we aim at using social activity to discourage food waste, which is in accordance with the celebratory technology described in [19]. Several examples of technologies exist in the area of human-food interaction that celebrate the positive relationships people have with food in their everyday social lives. One such example is Foodmunity [20], a community platform through which members can share personal experiences about meals. The main aim of the platform is to share these experiences with others as a basis for exposing people to the new and the unknown. Another example is a food recipe system called Kalas [21]. This system, which includes aggregated trails of user actions, provides different means of communication between users. EatWell [22] is a system for sharing nutrition-related memories targeting low-income communities. The system allows people to use their cell phones to create voice memories describing how they have tried to eat healthfully in their neighborhoods (e.g., at local restaurants) and to listen to the memories that others have created. Barden et al. [23] designed a technology platform that supports remote guests in experiencing togetherness and playfulness within the practices of a traditional dinner party. Furthermore, in [24], a menu-planning support system is presented to facilitate interaction and communication among neighbors. Their system allows users to manually select their preferences for food and neighbors. This information is later used to propose dishes consisting of shared ingredients owned by a number of individuals.

Although these projects study food-related practices at a collective level, they do not specifically explore sustainable food-related decision-making, such as influences on food waste. Currently, we are only aware of the work described in [25], where sustainable food-related decision-making was explored to understand issues of sharing and the use of social networking in an activist food sharing community. With Euphoria, we are interested in how the concept of social recipes influences social dynamics around food-related practices and its advantages regarding food waste, an important topic for sustainability.

III. EUPHORIA

Euphoria (Efficient food Use and food waste Prevention in Households through Increased Awareness) allows users to log and track available in-home ingredients as well as their wasteful behaviors. Based on this information, the system would help users to redirect behaviors, through social influence, towards more sustainable food-related practices in terms of food waste. The main function of the system is to detect potential food waste and respond by providing social recipes before the food gets wasted. Social recipes contain available ingredients from different households that need to be consumed in time. In this sense, the system targets prevention at the collective community level. The promotion of social interaction is expected to make food waste prevention more effective, as it provides a new pleasurable experience around food practices. The next section clarifies our current development progress and how the system will be tested and deployed in future work.

A. Apparatus

For the logging and tracking of in-home ingredients and wasteful behaviors, we have developed a mobile application for iOS and Android with a hybrid approach using PhoneGap (see Figure 1). At the first log-in, users can set their user profiles, including their demographics. On a daily basis, users can search and select ingredients and move them to their wish list or stock list. In these lists, users can indicate the amounts (in weights, numbers or liters), move items from their wish list to their stock list when an item is bought, and indicate consumption in the stock list. Whenever an item is wasted, users can select the reason for disposal. A survey is integrated in the application to measure the perception of control over wasteful behaviors.
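The wish-list/stock-list flow described above can be represented with a minimal data model. The following Python sketch is purely illustrative; all class and method names are our own and do not reflect the actual application code:

```python
from dataclasses import dataclass, field
from enum import Enum

class Unit(Enum):
    GRAMS = "g"
    PIECES = "pcs"
    LITERS = "l"

@dataclass
class Item:
    name: str
    amount: float
    unit: Unit

@dataclass
class UserLog:
    wish_list: list = field(default_factory=list)
    stock_list: list = field(default_factory=list)
    waste_log: list = field(default_factory=list)  # (item, reason) pairs

    def buy(self, item: Item) -> None:
        """Move a bought item from the wish list to the stock list."""
        if item in self.wish_list:
            self.wish_list.remove(item)
        self.stock_list.append(item)

    def waste(self, item: Item, reason: str) -> None:
        """Remove a wasted item from stock and record the reason of disposal."""
        self.stock_list.remove(item)
        self.waste_log.append((item, reason))
```

In this sketch, buying a wished-for liter of milk moves it to the stock list, and wasting it records the reason of disposal alongside the item, mirroring the logging steps described above.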

We used jQuery Mobile, HTML and CSS for the user interface of the mobile application and JavaScript for the user interaction. The server side was developed with the Play Framework for Java. The data flow between the client and the server is carried in JSON objects. Data from the users are stored in the local database of the smartphone using an SQLite database engine, and are sent to the server database, which is provided by the Play Framework, when the smartphone is connected to the internet.

Fig. 1. Interface design of the mobile application.

The mobile application can be used in two ways: to provide the social recipe module with the available ingredients that are likely to get wasted and, when a social recipe has been provided, to check whether there are changes in available items or wasteful behaviors. This would allow us to evaluate the system. Before integrating the module and user interface for social recipe suggestions in the mobile application, we first need to explore how to design the behavior of the system, which is the main contribution of this paper. In the next section, our user-centered approach, our findings and their implications for the design of the community-based social recipe system are explained.

IV. USER STUDIES

A. Exploratory field study

The exploratory field study took place from mid-February to mid-April 2014 with participants living in urban areas in the Netherlands. We had the following objectives:

• Identify amounts and types of food waste as well as the reasons for wastage.

• Explore users' expected experiences of a community-based social recipe system.

• Relate findings to design implications for the behavior of the system in optimizing food waste by means of recipe suggestions.

1) Methodology: At the beginning of the study, we asked participants to collect their grocery receipts, which were later used as cues for biweekly retrospective interviews on their food wastage during the past weeks. Participants received a small box to store the receipts and a black marker to cover any item that was considered private. They were further provided with a table bin and a kitchen scale and were asked to weigh the waste every time before emptying the bin and to write down the grams on a log sheet. The log sheet was replaced after each interview. Participants were also asked to separate organic waste from other generated waste (e.g., plastics, paper, etc.) and were instructed to include all edible as well as non-edible parts of food items, such as bones, tea bags, egg shells and banana peels. This was done to prevent differences in the definition of edibility. Participants were interviewed twice, individually, in couples or in the presence of other group members, depending on their living circumstances. Overall, we aimed at familiarizing ourselves with users' reasons for waste and with social practices around food, such as shared activities in shopping, paying for shared groceries, cooking and eating. During the last interviews, Euphoria was described to users verbally in a hypothetical fashion to gather their expected experiences and initial ideas about the concept.

2) Participant demographics: The study was carried out with 28 national and international students and young professionals in the age range of 22 to 31. Participants were subdivided into 8 groups based on different levels of proximity, i.e., living together and sharing the same kitchen, living in the same complex, or living in the same city. We did not specify any requirements for participation other than being a student or a young professional living in an urban area. Participants were recruited through social and personal networks, were visited at their homes after work hours by the same researcher, and were compensated by means of vouchers. The following provides a description of each group of participants.

Group A consisted of 4 international students living in single studios on campus. 3 students were from India and 1 from China. They all mainly cook for themselves, two to four days a week.

Group B consisted of 3 international students living in an apartment with a shared kitchen. 2 students were from Portugal and always do groceries and dinners together. The other student, from Germany, mainly cooks for herself as she is a vegetarian, unlike the others.

Group C consisted of 2 international couples of young professionals, each living in a different apartment, but in the same city. One couple came from Turkey, and the other couple from Australia and Turkey respectively. The first couple is married, and all are friends of each other.

Group D consisted of 5 international students from India. All are friends of each other but live in different houses. Two live in a studio, one lives with 2 Dutch students in an apartment with a shared kitchen, and the last two live together in an international house for 7 students with one shared kitchen.

Group E consisted of good friends living in a house with a shared kitchen. One is doing a PhD while the others were graduating. They are Dutch, have similar friends and travel together regularly.

Group F consisted of 2 Dutch young professionals who are living together with a shared kitchen. They describe themselves as very busy, which was given as a reason for not often cooking and eating at home.

Group G consisted of 3 Dutch female students who are living together in a student house with a shared kitchen. They are good friends and are part of a sorority club. Overall, they are socially very active and have dinners in big groups at least once a week.

Group H consisted of 5 Dutch female students who are also living together in a student house with a shared kitchen. All are members of a sports club and are also socially very active. They cook and eat together regularly.

B. Focus group

In addition to the exploratory field study, and as a follow-up, a double-blind focus group study was conducted with six PhD students and one moderator. A limitation of the exploratory field study is that participants were aware of the purpose of the study, which could have had an effect on their overall behavior and comments. We were interested to see how participants would respond to the system without immediately relating it to the negative behavior around food waste. To keep the moderator in a neutral position, a double-blind procedure was used to guard against experimenter bias and influences.

1) Methodology: After some warm-up questions about food experiences in general, participants were asked about food items they had available at home. They were also asked whether they would want to exchange these items with others and/or combine them with other people's food items into a meal. Next, the concept was presented with the following description of the social recipe recommendation system:

'Imagine a system that knows which foods you have in your house, which foods your friends have in their homes, and that can suggest that you get together with your friends to make a recipe with the available ingredients without having to go to the grocery store.'

Participants were then asked several questions regarding how the system would affect them, who they would like to use this system with, and how they envision the system would affect their group of friends. Two researchers attended the sessions for observation, and the sessions were video recorded.

2) Participant demographics: Participants for this study were recruited based on several requirements: first, they all had to live with at least one other person at home. Second, they had to cook at home at least three times a week. Third, they had to eat and cook with friends at least twice a week and, finally, they had to do their groceries themselves. The students were all from China living in the Netherlands, and they were compensated with lunch. We selected Chinese students because of their cooking culture; they cook regularly in social settings. Although much less food is wasted at the consumer level in non-Western (low-income) countries [1], as the world's largest emerging economy, China is starting to suffer a high wastage of food during consumption [26]. In the next section, we will mainly discuss findings from the exploratory field study unless indicated otherwise (i.e., from the focus group).

V. STUDY FINDINGS

A total of 231 food items were wasted over the whole study period, excluding drinks (other than milk), desserts, cookies and confectioneries. A food item was defined as equal to a single fruit or vegetable such as one banana or one cabbage, a basket of smaller fruits or vegetables such as cherry tomatoes or grapes, or one portion of rice or pasta. Each reported food item was further categorized into one of the following food groups: fruits, grains, dairy, vegetables, meat and fish, or other (e.g., sandwich spreads). Almost half of all the wasted items were vegetables. These vegetables were partly wasted, with on average 64 percent of the whole item being discarded. This finding supports the choice of targeting ingredients, specifically perishables.

A. Food group in relation to the reason of wasting

We used thematic analysis to categorize the reasons that were provided for wasting:

• Way of consumption: includes items that were used only for flavoring or whose parts were cut away because of the recipe.

• Items gone bad: includes all items with visual characteristics of decay, such as mold, discoloration, or growths through the skin, for example in potatoes. These could further be caused by forgetfulness, busy lives, too large purchases, unpredictability of longevity, changes of meal plans, the weather, etc.

• Doubtful items: includes items with visual unattractiveness such as dryness or over-moisture, items past their expiration dates, items that were left open in the kitchen for one or several days and were no longer trusted in terms of quality, and items that were simply considered old and had been in the fridge for a long time. These could also be caused by forgetfulness, busy lives, social activities or a lack of knowledge.

• Dealing with leftovers: includes cooked or prepared ingredients that were left after dinner but were not worthwhile saving (e.g., too little to save or not tasting good), or leftovers without plans for use in the near future. This category also includes meals that were saved for several days with the intention of usage but were eventually forgotten (cf. the causes above).

• Other: other reasons include the way of saving items (e.g., without foil), food items that were partly bad at the time of purchase, the unexpected taste of items, difficulty in getting food out of a package, or simply a bad fridge or not using a non-stick pan.

Vegetables were found to be wasted due to physical deterioration (N = 38) or because they were expected not to be edible and thus doubtful in quality and safety (N = 41). Hence, this also supports the potential of targeting vegetables with the social recipe system.

1) Implications for the system design: The use of existing ingredients to suggest social recipes with the intention of reducing food waste can be defined and explored as a constraint satisfaction problem. The social recipe system should find optimum recipes that can save food with the potential of being wasted. Therefore, it should consider minimizing the amount of available ingredients as the most important constraint.

The first implication concerns prioritizing ingredients. As users can enter food items in the mobile application, the system will know the type of item, its amount, when the item was added and how long the item has been in the stock list (availability). By comparing the duration of availability with the average longevity of the specific food item (which can be derived from a database), different risk levels can be assigned. We distinguish three levels of risk: high (the item is good for at most 2 more days), medium (the item is good for at most 4 more days) and low (the item is good for more than 4 days). The optimization functions can be defined as:

\[
\text{minimize} \sum_{i \in L_H} \sum_{j \in U} \text{Amount}(i, j) \tag{1}
\]

\[
\text{minimize} \sum_{i \in L_M} \sum_{j \in U} \text{Amount}(i, j) \tag{2}
\]

where:
• $L_H$: the set of items with a high risk of being wasted.
• $L_M$: the set of items with a medium risk of being wasted.
• $U$: the list of users receiving the recipe suggestion.
• $\text{Amount}(i, j)$: the amount of item $i$ that user $j$ has.

Equation (1) will have the highest weight in the overall constraint model, while low-risk items will not be taken into account as a constraint.
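As an illustration, the risk levels and the objective terms of Equations (1) and (2) could be computed as in the following Python sketch. The item representation and the `longevity_days` field are our own assumptions; in the system, average longevity would come from a food database:

```python
from dataclasses import dataclass

@dataclass
class StockItem:
    name: str
    amount: float          # grams, pieces or liters
    days_in_stock: int     # how long the item has been in the stock list
    longevity_days: int    # average longevity from a food database

def risk_level(item: StockItem) -> str:
    """High: good for at most 2 more days; medium: at most 4; low: longer."""
    remaining = item.longevity_days - item.days_in_stock
    if remaining <= 2:
        return "high"
    if remaining <= 4:
        return "medium"
    return "low"

def total_at_risk(stock_by_user: dict, level: str) -> float:
    """Objective term of Eq. (1)/(2): total amount of items at the given
    risk level, summed over all users in the group (the double sum)."""
    return sum(item.amount
               for items in stock_by_user.values()
               for item in items
               if risk_level(item) == level)
```

A recipe search would then prefer recipe/group combinations that minimize `total_at_risk(..., "high")` first, with `total_at_risk(..., "medium")` as a lower-weighted second objective, as stated above.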

A second implication is to match available ingredients with the ingredients necessary for a specific recipe. In most cases, the amounts of the available ingredients do not exactly match the recipe requirements, so in real life people would probably modify the recipe according to the ingredients they have. The system should be specified with matching criteria to provide suggestions by modifying the amount (i.e., a little bit less or more of an item should cause no problems) or by replacing an item (e.g., chicken instead of beef). When, for example, the amount of each available ingredient is not less than 1/2 of the suggested amount in the recipe description and the total amount of available ingredients is not less than 2/3 of the suggested total amount, the recipe can be identified as a possible modification. Furthermore, we could also enhance the set of suggestions by enabling the deletion of ingredients. For instance, if one of the ingredients is missing, a recipe could still be suggested by the system. This decision should depend on the importance of the ingredients, which can be labeled as critical, somewhat important or supportive. Because of their complexity, however, we will initially not include these constraints in our first prototype.
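The matching criteria described above (each ingredient at least 1/2 of its suggested amount, and the total at least 2/3 of the suggested total) could be checked as in this illustrative Python sketch. The function name and the capping of surpluses at the required amount are our own choices:

```python
def recipe_matches(available: dict, required: dict) -> bool:
    """Decide whether a recipe is feasible as a 'possible modification'.

    Every required ingredient must be available in at least 1/2 of its
    suggested amount, and the total available amount must be at least
    2/3 of the total suggested amount.  Surpluses are capped at the
    required amount so that one abundant ingredient cannot compensate
    for the others (our own interpretation of the criterion).
    """
    if any(ing not in available for ing in required):
        return False
    if any(available[ing] < 0.5 * amt for ing, amt in required.items()):
        return False
    total_available = sum(min(available[ing], amt)
                          for ing, amt in required.items())
    return total_available >= (2 / 3) * sum(required.values())
```

For a recipe requiring 300 g of pasta and 400 g of tomato, 200 g of pasta and 400 g of tomato would pass both thresholds, whereas 120 g of pasta would fail the per-ingredient 1/2 rule.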

B. Expected experiences

Most participants were enthusiastic about the concept of social recipes, but also noticed disadvantages or detractors. The following reasons were given for using the system, relating clearly to its advantages:

Habit: a number of participants consistently throw away the same type of food as a result of poor predictability of longevity at the time of purchase. A system that could help them plan their weekly dinners together, using items that have a constantly high potential of being wasted, was mentioned as a solution with high potential.

Awareness: busy lives and forgetting were mentioned as main reasons for throwing away one's own and housemates' food. A system that reminds users of the food available at home and its usage potential is perceived as very useful. Especially discounted food items (e.g., economy packages or buy-one-get-one offers) often end up being forgotten and wasted.

Creativity: some participants were not only interested in being reminded of what is available; they were also interested in knowing the potential usage of items that did not come to mind initially. The system could help them realize these possibilities and enhance creativity around cooking.

Surprise: related to creativity, participants from the focus group expect the content as well as the timing of social recipe suggestions to be positive surprises. This would encourage spontaneous meet-ups, with fun as a means of motivating behavior change.

Coordination: having a platform that increases users' awareness of availability while at the same time supporting the coordination of shopping as well as cooking was mentioned as helpful. This would prevent users who are living together from buying similar or already available items. Coordination is supported by Ganglbauer's [5] visibility dimension for cooperation, which has the potential to organize daily practices around food and prevent food waste.

The following advantages were given by the focus group.

Connectedness: participants expected the proposed system to provide more opportunities for seeing friends. The use of available items from different households together in one recipe and the surprising element of social recipe suggestions were expected to increase the feeling of being connected.

Education: participants also mentioned the potential of social recipe suggestions in supporting the improvement of individual cooking skills. Users could learn from the information provided by the system as well as from each other while cooking together. Social recipes could initiate conversations among users when the combination of items is surprising, or when particular items that previously were not planned for dinners can now be used.

Negative feedback: on the other hand, participants also expressed negative attitudes towards the system. The following reasons were given for not using the system.

Preparation values and kitchen constraints: for some participants, extensive cooking for others was valued as an individual activity done in advance of the actual dinner as a means of showing hospitality. This preference, however, could also have been influenced by the small size of their kitchens. Other kitchen aspects, like a poorly working fridge and low-quality cooking pans, were also mentioned as affecting food waste.

Location: location was an indicator for not using the system. Users might prefer going to the grocery store over collaborating with friends due to convenience, when the grocery store is located closer to their homes.

The following disadvantage was given by the focus group.

Trust: food is very personal and therefore, according to participants, the system should only be used among people who trust each other. Specifically, it was mentioned that users should be able to trust the way others handle food items before they are shared.

In the following, we continue with the importance of considering location and trust in the design of the social recipe system.

C. Convenience and importance of location

The system should consider spatial information. If, for example, a supermarket is located closer than friends' homes, users might find it easier and more convenient to go to the supermarket instead. The system should consider the distance to the other users in a group to which a recipe has been suggested. It could also take into account the distances to supermarkets. To increase the attractiveness of a social recipe, it could consider ingredients from users who are located not much farther away than the closest supermarket, or it could minimize the distance to be traveled by all users for each suggested recipe.

1) Implications for the system design: The system should consider the postcodes that are entered in user profiles and suggest recipes accordingly. A constraint value could be defined so that users do not need to travel more than a predefined distance. The optimization function can be defined as:

\[
\forall u \in U : \text{Travel}(u, l) \leq D \tag{3}
\]

where:
• $l$: the optimum location.
• $D$: the maximum distance.
• $U$: the list of users receiving the recipe suggestion.
• $\text{Travel}(u, l)$: the distance that user $u$ needs to travel to reach location $l$.
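Constraint (3) and the minimization of total travel distance could be sketched as follows. The straight-line distance is a simplification on our part (a real system would derive distances from postcodes and routing data), and all names are ours:

```python
import math

def travel(position: tuple, location: tuple) -> float:
    """Illustrative straight-line distance between two (x, y) points."""
    return math.dist(position, location)

def satisfies_constraint(users: dict, location: tuple,
                         max_distance: float) -> bool:
    """Constraint (3): every user can reach the location within distance D."""
    return all(travel(pos, location) <= max_distance
               for pos in users.values())

def best_location(users: dict, candidates: list) -> tuple:
    """Among candidate locations, pick the one minimizing the total
    distance traveled by all users for the suggested recipe."""
    return min(candidates,
               key=lambda loc: sum(travel(pos, loc)
                                   for pos in users.values()))
```

With two users at opposite ends of a street, a midpoint location satisfies a tight distance bound that either user's own home would violate, which is exactly the trade-off the constraint captures.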

D. Trust

From our findings, we can distinguish two types of trust: (1) trust in the suggestions provided by the system and (2) trust in the other users who have been suggested the same social recipe. For the first type of trust, the system could construct user profiles based on what users have bought before (what users like) and provide recipes with familiar food items. Some participants indicated the importance of receiving suggestions according to the foods they like. Another important aspect is to consider a balanced diet. The system could provide attractive suggestions for easy-to-make recipes that are nutritionally balanced. Adopting healthy eating patterns is expected to have greater effects on sustainability than just reducing food waste [27]. Furthermore, the system could also include an 'expert' friend-like digital agent that knows how long an item will keep (based on databases of average longevity) and communicates this information to users. This could create a moment of quality evaluation before disposal. This social agent could also prevent users from buying products that are likely to get wasted (based on previous experiences). Persuasive technology research has shown that social feedback by an embodied agent can create behavioral change [18]. Our system could include such an embodied or virtual agent that communicates with users.

For the second type of trust, a parameter can be set for the number of users to whom social recipes are suggested. This value could be important as it could affect the acceptance rate (i.e., people may enjoy less crowded dinners, or it may be more difficult to coordinate with more people). Also, users should have control over who they would like to connect with through the system for receiving social recipes. Initially, in our next studies, the groups will be predefined.

1) Implications for the system design: For now, we will only consider optimization functions for the balance of nutrition and the number of users. We can distinguish six classes of nutrients: proteins, fats, carbohydrates, vitamins, minerals and water, but we will only focus on the first three: the PFC ratio. According to [28], the ideal protein ratio is 10-20 percent, the ideal fat ratio is 20-25 percent and the ideal carbohydrate ratio is 50-70 percent, depending on age, Basal Metabolic Rate (BMR) and health conditions. For the balance of nutrition, the following optimization function can be defined:

\[
\forall n \in N : \text{Amount}(n) \in [n_{\min}, n_{\max}] \tag{4}
\]

where:
• $N$: the set of nutrients (proteins, fats and carbohydrates).
• $\text{Amount}(n)$: the amount of nutrient $n$.
• $n_{\min}$: the minimum amount of nutrient $n$ somebody should consume for a dinner.
• $n_{\max}$: the maximum amount of nutrient $n$ somebody should consume for a dinner.

For the number of users, for which users can manually indicate their preferences in the mobile application, the following optimization function can be defined:

\[
\forall r \in R : \text{Number}(r) \leq C \tag{5}
\]

where:
• $R$: the list of suggested recipes.
• $C$: the optimum number of users for a recipe.
• $\text{Number}(r)$: the number of users who receive recipe $r$.
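Constraints (4) and (5) could be checked as in the following sketch. We assume the PFC ratios are taken over energy, using the common 4/9/4 kcal-per-gram conversion for protein, fat and carbohydrate; this conversion and all names are our own assumptions, not part of the published system:

```python
def pfc_balanced(protein_g: float, fat_g: float, carb_g: float) -> bool:
    """Constraint (4) for the PFC ratio: protein 10-20%, fat 20-25%,
    carbohydrate 50-70% of energy.  Energy is approximated with
    4 kcal/g for protein and carbohydrate and 9 kcal/g for fat."""
    energy = 4 * protein_g + 9 * fat_g + 4 * carb_g
    ratios = (4 * protein_g / energy,
              9 * fat_g / energy,
              4 * carb_g / energy)
    bounds = ((0.10, 0.20), (0.20, 0.25), (0.50, 0.70))
    return all(lo <= r <= hi for r, (lo, hi) in zip(ratios, bounds))

def group_size_ok(recipes: dict, cap: int) -> bool:
    """Constraint (5): no suggested recipe is sent to more than C users."""
    return all(len(users) <= cap for users in recipes.values())
```

For example, a dinner with 75 g protein, 49 g fat and 315 g carbohydrate lands near a 15/22/63 energy split and satisfies the bounds, while a strongly fat-heavy meal does not.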

VI. DISCUSSION AND CONCLUSION

This paper contributes to the understanding of the relevance and desirability of a community-based social recipe recommendation system, and to the design of such a system based on user studies. The user studies have shown that there is potential for a community-based social recipe recommendation system and revealed important aspects to be considered in its design. Its value was found in a variety of aspects, such as compensating for habitual waste-related behavior, awareness, creativity, the triggering of spontaneous actions and surprises, coordination among users, education, and connectivity with friends for social food-related activities building on food sharing. Based on these findings, we discussed possible implications for the design of our community-based social recipe recommendation system.

A. Limitations of the study

Throughout this process, we ran into limitations of our study. For example, although using receipts as cues in retrospective interviews for reporting wasted foods is a more objective method than surveys, it is still prone to memory-recall biases. Reported wasted items, for example, are mainly rough estimates. With the mobile application for logging and tracking, we expect to greatly improve the accuracy of measurements and quantification, as users can immediately enter the usage or wastage of items after cooking. We are, however, also aware that it is important for users to be motivated to enter this information, which is expected to depend on the perceived value of the social recipe suggestions. A solution for this could be to target specific participants in future studies for user and system evaluation. We could, for example, recruit participants from food sharing communities or people who already live sustainably and are therefore interested in using our system. Another possibility is to recruit participants who are already used to using food-related applications (e.g., athletes who track their nutrition). People could also be instructed to only enter those items they would not mind sharing. The manual logging of food waste is another limitation, as it might reduce its frequency. Therefore, we are currently also working on automating the measurement of food waste through an augmented bin that weighs the waste. This is expected to provide us with more accurate food waste data. Furthermore, the group sizes of our participants are small. We should approach bigger communities of interconnected people with different interpersonal ties. A bigger network of users would correspond to a more realistic setting and could reveal different aspects to consider. The challenge is how to reach such numbers of users for testing purposes.

B. Future work

Currently, the optimization functions for the system are open for revisions and changes. Before finalizing the functions and their implementation, we will first deploy the mobile application in a second user study, with social recipes suggested to users manually, to measure their effects on food-related behavior. The collected food data will then be used to test the optimization functions in a simulation study, whose results can be compared with the results derived from the user study. In the user study, our main interest lies in how social recipes affect food waste and the social dynamics around food-related practices. We will also explore how social recipes affect perception, environmental attitude, social values and general sustainable behavior. The objective of the simulation study is to explore how recipe suggestions could be improved through optimization functions. A system and user evaluation comes with challenges, for example, getting sufficient data. To compensate for small data sets, we are planning to apply Bayesian approaches for data analysis.

ACKNOWLEDGMENT

We would like to thank all participants for their hospitality and collaboration. This work is supported in part by the Erasmus Mundus Joint Doctorate in Interactive and Cognitive Environments (ICE), which is funded by the EACEA Agency of the European Commission under EMJD ICE FPA n. 2010-0012.

Copyright © 2014 WCST-2014 Technical Co-Sponsored by IEEE UK/RI Computer Chapter 26


REFERENCES

[1] J. Gustavsson, C. Cederberg, U. Sonesson, R. van Otterdijk, and A. Meybeck, "Global food losses and food waste," Food and Agriculture Organization of the United Nations, Tech. Rep., 2011.

[2] W. Moomaw, T. Griffin, K. Kurczak, and J. Lomax, "The critical role of global food consumption patterns in achieving sustainable food systems and food for all," United Nations Environment Programme, Tech. Rep., 2012.

[3] C. Mobley, W. M. Vagias, and S. L. DeWard, "Exploring Additional Determinants of Environmentally Responsible Behavior: The Influence of Environmental Literature and Environmental Attitudes," Environment and Behavior, vol. 42, no. 4, pp. 420–447, Oct. 2009.

[4] T. L. Milfont, J. Duckitt, and L. D. Cameron, "A Cross-Cultural Study of Environmental Motive Concerns and Their Implications for Proenvironmental Behavior," Environment and Behavior, vol. 38, no. 6, pp. 745–767, Nov. 2006.

[5] E. Ganglbauer, G. Fitzpatrick, and G. Molzer, "Creating Visibility: Understanding the Design Space for Food Waste," in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, 2012, pp. 1–9.

[6] E. Ganglbauer, G. Fitzpatrick, and R. Comber, "Negotiating Food Waste: Using a Practice Lens to Inform Design," ACM Transactions on Computer-Human Interaction, vol. 20, no. 2, pp. 1–25, 2013.

[7] A. Olofsson and S. Ohman, "General Beliefs and Environmental Concern: Transatlantic Comparisons," Environment and Behavior, vol. 38, no. 6, pp. 768–790, Nov. 2006.

[8] R. Comber, J. Hoonhout, A. van Halteren, P. Moynihan, and P. Olivier, "Food Practices as Situated Action: Exploring and designing for everyday food practices with households," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013, pp. 2457–2466.

[9] E. Ganglbauer, W. Reitberger, and G. Fitzpatrick, "An Activist Lens for Sustainability: From Changing Individuals to Changing the Environment," in Persuasive, 2013, pp. 63–68.

[10] V. Lim, F. Yalvac, M. Funk, J. Hu, and M. Rauterberg, "Can we reduce waste and waist together through EUPHORIA?" in The Third IEEE International Workshop on Social Implications of Pervasive Computing, 2014, pp. 382–387.

[11] H. Brynjarsdottir, M. Håkansson, J. Pierce, E. P. S. Baumer, C. DiSalvo, and P. Sengers, "Sustainably Unpersuaded: How Persuasion Narrows Our Vision of Sustainability," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012, pp. 947–956.

[12] J. Froehlich, L. Findlater, and J. Landay, "The Design of Eco-Feedback Technology," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2010, pp. 1999–2008.

[13] J. Pierce, W. Odom, and E. Blevis, "Energy Aware Dwelling: A Critical Survey of Interaction Design for Eco-Visualizations," in Proceedings of the 20th Australasian Conference on Computer-Human Interaction, 2008, pp. 1–8.

[14] T. Holmes, "Eco-visualization: Combining art and technology to reduce energy consumption," in ACM SIGCHI Conference on Creativity & Cognition, 2007, pp. 153–162.

[15] J. Froehlich, S. Patel, J. A. Landay, L. Findlater, M. Ostergren, S. Ramanathan, J. Peterson, I. Wragg, E. Larson, F. Fu, and M. Bai, "The design and evaluation of prototype eco-feedback displays for fixture-level water usage data," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM Press, 2012, pp. 2367–2376. [Online]. Available: http://dl.acm.org/citation.cfm?doid=2207676.2208397

[16] J. Maitland, M. Chalmers, and K. A. Siek, "Persuasion not Required: Improving our Understanding of the Sociotechnical Context of Dietary Behavioural Change," in PervasiveHealth, 2009, pp. 1–8.

[17] D. Foster and S. Lawson, "Liking Persuasion: Case studies in Social Media for Behaviour Change," in Extended Abstracts on Human Factors in Computing Systems, 2013, pp. 1–8.

[18] C. Midden and J. Ham, "Persuasive technology to promote environmental behaviour," in Environmental Psychology: An Introduction, 2013, ch. 23, pp. 243–254.

[19] A. Grimes and R. Harper, "Celebratory Technology: New Directions for Food Research in HCI," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2008, pp. 467–476.

[20] S. Gross, A. Toombs, J. Wain, and K. Walorski, "Foodmunity: Designing community interactions over food," in Extended Abstracts on Human Factors in Computing Systems, 2011, pp. 1019–1024.

[21] M. Svensson, K. Höök, and R. Cöster, "Designing and Evaluating Kalas: A Social Navigation System for Food Recipes," ACM Transactions on Computer-Human Interaction, vol. 12, no. 3, pp. 374–400, 2005.

[22] A. Grimes, M. Bednar, J. D. Bolter, and R. E. Grinter, "EatWell: Sharing Nutrition-Related Memories in a Low-Income Community," in ACM Conference on Computer-Supported Cooperative Work and Social Computing, 2008, pp. 87–96.

[23] P. Barden, R. Comber, D. Green, D. Jackson, C. Ladha, T. Bartindale, N. Bryan-Kinns, T. Stockman, and P. Olivier, "Telematic Dinner Party: Designing for Togetherness through Play and Performance," in Proceedings of the Designing Interactive Systems Conference, 2012, pp. 38–47.

[24] H. Kanai and K. Kitahara, "A Menu-planning Support System to Facilitate Communication among Neighbors," in ACM Conference on Computer-Supported Cooperative Work and Social Computing, 2011, pp. 661–664.

[25] E. Ganglbauer, G. Fitzpatrick, O. Subasi, and F. Guldenpfennig, "Think Globally, Act Locally: A Case Study of a Free Food Sharing Community and Social Networking," in ACM Conference on Computer-Supported Cooperative Work and Social Computing, 2014, pp. 911–921.

[26] G. Liu, X. Liu, and S. Cheng, "Food security: Curb China's rising food wastage," Nature, vol. 498, p. 170, 2013.

[27] M. M. Rutten, "Reducing food waste by households and in retail in the EU: A prioritisation on the basis of economic, land use and food security impacts," in Proceedings of the First International Conference on Food Security, Noordwijkerhout, Netherlands, 2013.

[28] C. Nishikawa, A. Nagai, T. Ito, and S. Maruyama, "Contemporary Challenges and Solutions in Applied Artificial Intelligence," vol. 489, pp. 55–60, 2013. [Online]. Available: http://link.springer.com/10.1007/978-3-319-00651-2


Rainfall-Runoff Relationship for Streamflow Discharge Forecasting by ANN Modelling

Sirilak Areerachakul

Faculty of Science and Technology

Suan Sunandha Rajabhat University Bangkok, Thailand

[email protected]

Prem Junsawang

Department of Statistics, Faculty of Science

Khon Kaen University Khon Kaen, Thailand

[email protected]

Abstract— Rainfall-runoff modeling has been considered one of the major problems in water resources management, especially in developing countries such as Thailand. Artificial Neural Network (ANN) models are powerful tools for predicting the relation between rainfall and runoff parameters. The Lam Phachi watershed is located in western Thailand. Each year, people there suffer drought in the dry season or flooding in the wet season due to the influence of the monsoon, leading to soil erosion and sediment deposition in the watershed. The goal of this work is to implement an ANN for daily streamflow discharge forecasting in the Lam Phachi watershed, Suan Phung, Rachaburi, Thailand. For model calibration and validation, two time series of rainfall and discharge were recorded daily at a single hydrologic station (K.17) over the water years 2009-2012. The data from the first three years were used as the training dataset and the last year as the test dataset. The results showed that the coefficient of determination (R2) of the ANN was equal to 0.88. Moreover, these results could be applied to solve problems in water resource studies and management.

Keywords— rainfall-runoff; forecasting; artificial neural network

I. INTRODUCTION

Thailand has long been one of the world's major agricultural countries, producing and exporting rice, rubber, and other crops. Based on reports by the Department of Water Resources and the Department of Groundwater Resources, Ministry of Natural Resources and Environment, Thailand has periodically suffered floods and droughts [1]. It is urgent for the country to develop water resources planning and management using advanced technologies. Modeling of the rainfall-runoff process has been the subject of research among hydrologists and engineers for a very long time. The transformation of rainfall into runoff is an extremely complex, dynamic, and non-linear process which is affected by many interrelated factors [2]. In recent years, Artificial Neural Networks (ANNs) have become extremely popular for prediction and forecasting in a number of areas, including finance, power generation, medicine, water resources and environmental science [3]. Although the concept of artificial neurons was first introduced in 1943 by McCulloch and Pitts [4], ANN applications have blossomed since the introduction of the backpropagation training algorithm for feedforward ANNs in 1986 [5]. Thus, ANNs are considered a fairly new tool in the field of prediction and forecasting. Coulibaly et al. [6] proposed a multilayer perceptron (MLP) neural network to forecast real-time reservoir inflow. Castellano-Mendez et al. [7] studied the hydrological behavior of the Xallas basin in the northwest of Spain based on the Box-Jenkins method and ANNs. Their experimental results showed that the ANN outperformed the Box-Jenkins model, and they observed that the neural network was capable of modeling a complex rainfall-runoff relationship. Sohail et al. [8] compared an ANN to a multivariate auto-regressive moving average (MARMA) method in a small watershed of the Tono area in Japan during wet and dry seasons. Their results showed that ANN models provided better results in wet seasons, when the nonlinearity of the rainfall-runoff process was high. Ghumman et al. [9] compared an ANN to a conceptual mathematical model for runoff forecasting on a watershed in Pakistan. They concluded that the ANN model was an important alternative to conceptual models and could be used when the record of collected data was short and the data were of low standard. Among these ANN applications, max-min normalization has usually been applied for data preprocessing before the ANN model was created. The objective of the study presented in this paper is to apply the technique of ANNs to rainfall-runoff forecasting.

This paper is organized as follows: Section 2 describes the study area used in the experiments. Section 3 contains the methodology. The experimental setup and results are presented in Section 4. Finally, Section 5 concludes the paper.

II. DESCRIPTION OF THE STUDY AREA

The Lam Phachi watershed is a tributary of the Mae Klong River, located in western Thailand and bordering the Union of Myanmar. The watershed area is approximately 2,634 km2 and it is bordered by a mountain range. The western part of the watershed is mainly mountainous and hilly [10]. The Lam Phachi River basin is approximately 142 kilometers long. Each year, people suffer drought in the dry season or flooding in the wet season due to the influence of the monsoon, leading to soil erosion and sediment deposition in the watershed. The Lam Phachi hydrologic station is located at Ban Lam Phachi, Suan Phung, as shown in Figure 1. The station is located far from its water source, approximately 94


kilometers. The site is equipped with an automatic water level gauge and managed by the Royal Irrigation Department. The daily rainfall (mm) and discharge (cubic meters per second, CMS) were collected from April 1st, 2009 to April 1st, 2013. There are no missing values in either the rainfall or the discharge series.

Figure 1. Location of the Lam Phachi hydrologic station (K.17)

III. METHODOLOGY

ANN, data preprocessing and statistical evaluation are described in this section.

A. Artificial Neural Network

Artificial neural networks (ANN) have also been used to model the rainfall–runoff process. The ANN is a black box model and learns the association between the rainfall and runoff patterns measured for past storm events to make predictions for future events [11]. A neural network can be used to represent a nonlinear mapping between input and output vectors. Neural networks are among the popular signal-processing technologies. In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters [12] [13]. A general network consists of a layered architecture, an input layer, one or more hidden layers and an output layer [14]. Fig. 2 shows a typical architecture of a multilayer perceptron network [15]. The multilayer perceptron (MLP) is an example of an artificial neural network that is used extensively to solve a number of different problems, including pattern recognition and interpolation [16] [17]. Each layer is composed of neurons, which are interconnected with each other by weights. In each neuron, a specific mathematical function called the activation function accepts input from previous layers and generates output for the next layer. In the experiment, the activation function used is the sigmoid transfer function [18] which is defined as in (1):

f(s) = 1 / (1 + e^{-s})    (1)

where s = \sum_{i=1}^{n} w_i x_i, in which w_i are the weights and x_i are the input values.

The MLP is trained using the Levenberg–Marquardt technique as this technique is more powerful than the conventional gradient descent techniques [16].
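As a small illustration of Eq. (1), a single sigmoid neuron computes a weighted sum of its inputs and passes it through the logistic transfer function. The weights and inputs below are illustrative values, not parameters from the trained network in the paper.

```python
import numpy as np

def sigmoid(s):
    """Logistic activation f(s) = 1 / (1 + e^{-s}) from Eq. (1)."""
    return 1.0 / (1.0 + np.exp(-s))

def neuron_output(weights, inputs):
    """Weighted sum s = sum(w_i * x_i), followed by the sigmoid."""
    s = np.dot(weights, inputs)
    return sigmoid(s)

w = np.array([0.5, -0.3, 0.8])   # illustrative weights w_i
x = np.array([1.0, 2.0, 0.5])    # illustrative inputs x_i
print(round(neuron_output(w, x), 4))   # sigmoid(0.3) ≈ 0.5744
```

An MLP stacks layers of such neurons, with the outputs of one layer serving as the inputs of the next.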

Figure 2 A typical multilayer perceptron ANN architecture

B. Data Preprocessing

At the initial stage of the experiment, the data were scaled using min-max normalization, which is mathematically expressed by [19]:

x_n = \frac{x_o - x_o^{min}}{x_o^{max} - x_o^{min}} (x_n^{max} - x_n^{min}) + x_n^{min}    (2)

where x_o and x_n denote the original and transformed data, whereas x_o^{max} and x_o^{min} denote the maximum and minimum values of the original data, respectively. In this study, x_n^{min} and x_n^{max} were set to 0.2 and 0.8, respectively.
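A minimal sketch of the min-max scaling in Eq. (2), mapping a series into the [0.2, 0.8] target range used in this study; the discharge values are hypothetical, not the K.17 records.

```python
import numpy as np

def min_max_scale(x, n_min=0.2, n_max=0.8):
    """Eq. (2): linearly map x from [x_min, x_max] to [n_min, n_max]."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min) * (n_max - n_min) + n_min

discharge = [3.0, 7.5, 12.0, 48.0]   # hypothetical daily discharge (CMS)
scaled = min_max_scale(discharge)
print(round(scaled.min(), 3), round(scaled.max(), 3))   # 0.2 0.8
```

Scaling into a sub-interval of (0, 1) rather than the full unit interval keeps the targets away from the flat tails of the sigmoid, which typically helps training.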

C. Statistical Evaluation

Four standard statistical criteria are adopted to evaluate model performance in rainfall-runoff forecasting. The performance of the forecasting model is expressed in terms of root mean square error (RMSE), average absolute relative


error (AARE), correlation coefficient (R) and coefficient of determination (R2). These are given as follows:

RMSE = \sqrt{ \frac{1}{n} \sum_{t=1}^{n} (Q_t - \hat{Q}_t)^2 }    (3)

AARE = \frac{1}{n} \sum_{t=1}^{n} \left| \frac{Q_t - \hat{Q}_t}{Q_t} \right|    (4)

R = \frac{ \sum_{t=1}^{n} (Q_t - \bar{Q})(\hat{Q}_t - \bar{\hat{Q}}) }{ \sqrt{ \sum_{t=1}^{n} (Q_t - \bar{Q})^2 } \sqrt{ \sum_{t=1}^{n} (\hat{Q}_t - \bar{\hat{Q}})^2 } }    (5)

R^2 = \left( \frac{ n \sum_{t=1}^{n} Q_t \hat{Q}_t - \sum_{t=1}^{n} Q_t \sum_{t=1}^{n} \hat{Q}_t }{ \sqrt{ n \sum_{t=1}^{n} Q_t^2 - \left( \sum_{t=1}^{n} Q_t \right)^2 } \sqrt{ n \sum_{t=1}^{n} \hat{Q}_t^2 - \left( \sum_{t=1}^{n} \hat{Q}_t \right)^2 } } \right)^2    (6)

where Q_t, \hat{Q}_t, \bar{Q}, and \bar{\hat{Q}} denote the t-th observed discharge, predicted discharge, observed discharge mean, and predicted discharge mean, respectively.
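A minimal sketch of these criteria on short hypothetical observed and predicted discharge series (not the study data). Note that Eq. (6) is the computational form of the squared Pearson correlation, so R2 can equivalently be computed by squaring Eq. (5), which is what the sketch does.

```python
import numpy as np

def rmse(q, q_hat):
    """Eq. (3): root mean square error."""
    return np.sqrt(np.mean((q - q_hat) ** 2))

def aare(q, q_hat):
    """Eq. (4): average absolute relative error."""
    return np.mean(np.abs((q - q_hat) / q))

def corr(q, q_hat):
    """Eq. (5): Pearson correlation between observed and predicted series."""
    qm, qhm = q.mean(), q_hat.mean()
    num = np.sum((q - qm) * (q_hat - qhm))
    den = np.sqrt(np.sum((q - qm) ** 2)) * np.sqrt(np.sum((q_hat - qhm) ** 2))
    return num / den

q = np.array([5.0, 8.0, 12.0, 20.0])       # hypothetical observed discharge (CMS)
q_hat = np.array([6.0, 7.5, 13.0, 18.0])   # hypothetical predicted discharge (CMS)

print(round(rmse(q, q_hat), 3))            # 1.25
print(round(aare(q, q_hat), 3))
print(round(corr(q, q_hat) ** 2, 3))       # R^2 as the square of Eq. (5)
```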

IV. EXPERIMENTAL AND RESULTS

In this section, the daily rainfall and discharge data were used for model calibration (training) and validation (testing). Both rainfall and discharge were divided into two parts: the data from water years 2009 to 2011 were used for calibration and the data from water year 2012 were used for validation. In this work, correlation analysis was first applied to obtain the set of candidate model inputs. For daily rainfall, cross-correlation analysis was applied to measure the strength of the linear relationship between the two time series (rainfall and discharge). For daily discharge, autocorrelation was applied to measure the correlation of the time series with itself. The results of the input determination and of the 1-day ahead streamflow discharge forecasting are provided below.

A. Results of Input determination

Let C(k) represent the cross-correlation value between daily rainfall and discharge at lag time k, and A(k) represent the autocorrelation within daily discharge at lag time k. In this study, the lag time k varied from 1 to 10. Table I shows that the computed cross-correlations at k = 1 and k = 2 (C(1) = 0.48 and C(2) = 0.49) were higher than those at the other lags, and likewise the autocorrelations at k = 1 and k = 2 (A(1) = 0.85 and A(2) = 0.65) were significantly higher than the others, as indicated in boldface. Let Qt and Pt denote the discharge and rainfall at day t. For forecasting the 1-day ahead streamflow discharge Qt+1, the candidate model inputs were formed by combinations of the current and previous rainfall and discharge Pt, Pt-1, Qt and Qt-1. These candidate model input types with their function expressions are given in Table II.

TABLE I. CROSS-CORRELATION C(k) BETWEEN DAILY RAINFALL AND DISCHARGE, AND AUTOCORRELATION A(k) WITHIN DAILY DISCHARGE, AT LAG TIME k

k      1     2     3     4     5     6     7     8     9     10
C(k)   0.48  0.49  0.41  0.33  0.32  0.30  0.25  0.18  0.18  0.16
A(k)   0.85  0.65  0.51  0.44  0.39  0.34  0.31  0.30  0.30  0.30

TABLE II. CANDIDATE MODEL INPUT TYPES WITH FUNCTION EXPRESSIONS

Input Type   Function Expression
Type I       Qt+1 = F1(Qt, Pt)
Type II      Qt+1 = F2(Qt, Pt, Qt-1)
Type III     Qt+1 = F3(Qt, Pt, Pt-1)
Type IV      Qt+1 = F4(Qt, Pt, Qt-1, Pt-1)
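The lag-correlation screening behind Table I can be sketched as follows. The rainfall and discharge series here are short illustrative vectors, not the K.17 records, so the resulting coefficients will not match the table.

```python
import numpy as np

def lag_corr(x, y, k):
    """Pearson correlation between x[t-k] and y[t] for lag k >= 1.
    With x = rainfall, y = discharge this gives C(k); with x = y = discharge, A(k)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x[:-k], y[k:])[0, 1]

rain = [0.0, 12.0, 3.0, 0.0, 25.0, 8.0, 0.0, 0.0]    # illustrative daily rainfall (mm)
flow = [4.0, 4.5, 9.0, 6.5, 5.0, 16.0, 11.0, 7.0]    # illustrative daily discharge (CMS)

C1 = lag_corr(rain, flow, 1)   # rainfall leading discharge by one day
A1 = lag_corr(flow, flow, 1)   # discharge persistence at lag one
print(round(C1, 2), round(A1, 2))
```

Computing C(k) and A(k) for k = 1..10 and keeping the lags with the largest coefficients reproduces the input-selection step that yields the candidates in Table II.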

B. Results of Streamflow Discharge Forecasting

For each model input, MLP neural networks with various network structures for discharge forecasting were constructed. The number of hidden neurons was varied from 5 to 20. The Levenberg-Marquardt (LM) method was adopted as the learning algorithm because of its fast convergence speed [18]. The learning goal was investigated by testing mse values of 0.001, 0.005 and 0.0001, and the maximum number of epochs was set to 5000. The optimal network structure for each model input was determined by trial and error: for each input type and each transformation method, the network with the highest value of R2 was selected. The four statistical performance measures of each model input with its optimal network structure are shown in Table III. The model input Type II with network structure 3-19-1 provided the best values in three statistical measures, with R2 = 0.88, R = 0.95 and RMSE = 6.55 for 1-day ahead discharge forecasting. Only for AARE was the value of Type II slightly worse than those of Type I and Type III. Comparison results between observed and predicted discharges in year 2012 are graphically shown in Fig. 3.

TABLE III. STATISTICAL RESULTS COMPARISON OF 1-DAY AHEAD DISCHARGE FORECASTING FOR FOUR CANDIDATE INPUTS WITH THEIR OPTIMAL NETWORK STRUCTURES

Input Type   Network Structure   AARE   RMSE   R2     R
Type I       2-17-1              0.21   7.93   0.83   0.93
Type II      3-19-1              0.24   6.55   0.88   0.95
Type III     3-5-1               0.21   7.86   0.83   0.95
Type IV      4-17-1              0.27   7.63   0.84   0.93
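The trial-and-error selection described above can be sketched as a small grid search: train one network per hidden-layer size from 5 to 20 and keep the structure with the highest R2 on the validation set. scikit-learn's MLPRegressor stands in for the Levenberg-Marquardt-trained network of the paper (LM is not offered by scikit-learn), and the data are synthetic placeholders rather than the K.17 records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
coef = np.array([0.7, 0.2, 0.3])      # hypothetical Type II mapping for the demo
X_train = rng.random((300, 3))        # columns stand for Q_t, P_t, Q_{t-1}
y_train = X_train @ coef
X_val = rng.random((100, 3))
y_val = X_val @ coef

best_r2, best_h = -np.inf, None
for h in range(5, 21):                # hidden neurons varied from 5 to 20
    net = MLPRegressor(hidden_layer_sizes=(h,), activation="logistic",
                       solver="lbfgs", max_iter=5000,
                       random_state=0).fit(X_train, y_train)
    r2 = r2_score(y_val, net.predict(X_val))
    if r2 > best_r2:
        best_r2, best_h = r2, h
print(best_h, round(best_r2, 3))
```

On the real data, the same loop would be repeated for each input type of Table II, yielding the per-type optimal structures reported in Table III.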


Figure 3. Comparison between observed and predicted discharges during the wet period in year 2012.

V. CONCLUSIONS

In this paper, an ANN was implemented for daily streamflow discharge forecasting in the Lam Phachi watershed, Suan Phung, Rachaburi, Thailand. For input determination, cross-correlation between daily rainfall and discharge and autocorrelation within daily discharge were applied to obtain the candidate inputs. The results showed that the appropriate model inputs (Type II) for 1-day ahead discharge forecasting were the current rainfall and discharge and the previous discharge. The coefficient of determination (R2) of model input Type II with its network structure 3-19-1 was equal to 0.88. This encouraging result could provide a very useful and accurate tool for solving problems in water resource management.

ACKNOWLEDGMENT

The authors would like to thank the National Research Council of Thailand for financial support, the Royal Irrigation Department for all of the experimental data, and Suan Sunandha Rajabhat University and Khon Kaen University for facility support.

REFERENCES

[1] P. Junsawang, J. Asavanant, and C. Lursinsap, "Artificial neural network for rainfall-runoff relationship," Master's thesis, Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, 2007.

[2] S. Srinivasulu and A. Jain, "A comparative analysis of methods for artificial neural network rainfall-runoff models," Applied Soft Computing, vol. 6, 2006, pp. 295–306.

[3] H. R. Maier and G. C. Dandy, "Neural networks for the prediction and forecasting of water resources variables: a review of modelling issues and applications," Environmental Modelling and Software, vol. 15, 2000, pp. 101–124.

[4] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, 1943, pp. 115–133.

[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, 1986, pp. 533–536.

[6] P. Coulibaly, F. Anctil, and B. Bobée, "Daily reservoir inflow forecasting using artificial neural networks with stopped training approach," Journal of Hydrology, vol. 230, no. 3-4, 2000, pp. 244–257.

[7] M. Castellano-Méndez, W. González-Manteiga, M. Febrero-Bande, J. M. Prada-Sánchez, and R. Lozano-Calderón, "Modelling of the monthly and daily behaviour of the runoff of the Xallas river using Box-Jenkins and neural networks methods," Journal of Hydrology, vol. 296, no. 1-4, 2004, pp. 38–58.

[8] A. Sohail, K. Watanabe, and S. Takeuchi, "Runoff analysis for a small watershed of Tono area Japan by back propagation artificial neural network with seasonal data," Water Resources Management, vol. 22, no. 1, 2008, pp. 1–22.

[9] A. Ghumman, Y. M. Ghazaw, A. Sohail, and K. Watanabe, "Runoff forecasting by artificial neural network and conventional model," Alexandria Engineering Journal, vol. 50, 2011, pp. 345–350.

[10] H. Sugiyama, V. Vudhivanich, K. Lorsirirat, and A. C. Whitaker, "Factors affecting hydrologic characteristics in the Lam Phachi river basin," in Workshop on Watershed Degradation and Restoration of the Lam Phachi River Basin, 2002.

[11] H. C. Lloyd and S. W. Tommy, "Improving event-based rainfall-runoff modeling using a combined artificial neural network–kinematic wave approach," Journal of Hydrology, vol. 390, 2010, pp. 92–107.

[12] L. Khuan, N. Hamzah, and R. Jailani, "Prediction of Water Quality Index (WQI) Based on Artificial Neural Network (ANN)," in Conference on Research and Development Proceedings, Malaysia, 2002, pp. 157–161.

[13] A. Najah, A. Elshafie, O. Karim, and O. Jaffar, "Prediction of Johor River Water Quality Parameters Using Artificial Neural Networks," Journal of Scientific Research, EuroJournals Publishing, 2009, pp. 422–435.

[14] C. Zhou, L. Gao, and C. Peng, "Pattern Classification and Prediction of Water Quality by Neural Network with Particle Swarm Optimization," in Proceedings of the 6th World Congress on Intelligent Control and Automation, China, June 2006, pp. 2864–2868.

[15] S. Areerachakul and S. Sanguansintukul, "Water Classification Using Neural Network: A Case Study of Canals in Bangkok, Thailand," in The 4th International Conference for Internet Technology and Secured Transactions (ICITST-2009), United Kingdom, 2009.

[16] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Pearson Prentice Hall, Delhi, India, 2005.

[17] S. H. Musavi and M. Golabi, "Application of Artificial Neural Networks in the River Water Quality Modeling: Karoon River, Iran," Journal of Applied Sciences, Asian Network for Scientific Information, 2008, pp. 2324–2328.

[18] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice-Hall International, Inc., 1999.

[19] N. Sajikumar and B. Thandaveswara, "A non-linear rainfall-runoff model using an artificial neural network," Journal of Hydrology, vol. 216, 1999, pp. 32–55.


Performance of Granular Activated Carbon Compared with Activated Carbon (Bagasse) Biofiltration in Wastewater Treatment

Nathaporn Areerachakul

Faculty of Industrial Technology, Suan Sunandha Rajabhat University, Bangkok, Thailand

[email protected]

Abstract— Organic matter is known as a major problem in wastewater. In this study, GAC and activated carbon from bagasse, an agricultural waste, were used to treat organic pollutants in wastewater. Activated carbons were prepared from bagasse waste by zinc chloride activation under three different activation atmospheres, in order to develop carbons with substantial capability and to compare their efficiency with GAC over a long period. The activated carbon was characterized by its iodine number. The performance of an activated carbon biofilter is influenced by many operational conditions, such as filter medium type and size, filtration velocity, filter depth and porosity. Even after a long time of operation (42 days), the GAC biofilter consistently maintained an organic removal efficiency of 60%, even with a shallow filter depth of 300 mm. The change of influent concentration also affected the organics removal in the GAC biofilter. The filters which were fed with wastewater of higher organic concentration had a better TOC removal efficiency because of the increase in the organisms' activities as they received more nutrients. The daily backwash to remove the solids did not affect the biofilm, and thus the organic removal. The GAC medium used in the biofilter led to better organic removal compared to the activated carbon from bagasse.

Keywords-biofilter; activated carbon; organic; bagasse

I. INTRODUCTION

Organic matter is present in all kinds of water, particularly in wastewater. In water, organic matter is usually quantified by biological oxygen demand (BOD5), chemical oxygen demand (COD), biodegradable organic carbon (BDOC) and total organic carbon (TOC) measurements. The presence of organic matter in water, even in low concentrations, can directly affect water quality. Natural organic matter (NOM) in water is a source of nutrients for aquatic microorganisms, including opportunistic pathogens regrowing in distribution systems. NOM also reacts with disinfectants such as chlorine and ozone to form potentially carcinogenic and harmful disinfection by-products. In addition, NOM can impair the color, odor and taste of water [1]. Even though a large portion can be removed by conventional wastewater treatment processes, it is difficult to remove it completely. Therefore, NOM removal is important in advanced water treatment to meet water quality requirements.

A biofilter is one of the water treatment processes that can effectively remove organic matter that is not removed by conventional sewage treatment. The function of the biological filter is based on the activities of the micro-organism community attached onto the filter media. Organic substances in the influent are adsorbed and biodegraded by those microbes. Many studies have shown that biofilters can remove the majority of organic matter from water and wastewater with low operation and maintenance requirements [2, 3]. Therefore, the biofilter is more suitable than other treatment methods. Moreover, biological filtration is economical and safe for the environment.

Biological filtration using granular activated carbon (GAC) is an efficient process in drinking water treatment. Previous studies showed that biological filters using GAC have great potential to remove disinfection by-products and biodegradable organic and synthetic substances [3]. Reference [4] stated that enhanced coagulation and GAC were proposed as the best available technologies for precursor control.

GAC has an extremely large and irregular surface, of the order of several hundred m2/g of carbon, that provides a large number of available sites for microorganisms to attach to and grow on [3]. The GAC structure also protects microbes from shear loss during operation. In the initial stage of operating the biofilter, adsorption of substances and micro-organisms is the major activity, while in the later stages, organic degradation by microbial activities becomes more important. Microbes growing on GAC include bacteria and protozoa. Although there are many studies on biofilters, the behavior of microbes during filtration has still not been explained clearly, due to the variety of microorganisms involved and the number of factors that can influence biofilter performance.

The activities of microbes determine the performance of biological filtration. Microbes oxidize organic matter in water to produce energy, so the available nutrient source in the feed water is essential for their development. In addition, hydraulic loading rate, backwashing technique, temperature and pH can affect the accumulation of biomass on the GAC in the biofilter.

Activated carbon from bagasse, prepared using ZnCl2 as the activating agent at 500 °C with a 0.5 h soaking time, was studied by [5].

Copyright © 2014 WCST-2014 Technical Co-Sponsored by IEEE UK/RI Computer Chapter 32


In this study, the long-term performance of GAC and AC (bagasse) biofilters was evaluated using synthetic wastewater as the organic source. Biomass growth on the GAC was also investigated.

II. EXPERIMENTAL INVESTIGATION

Preparation of Activated Carbon (Bagasse)

The bagasse was obtained from the nearby locality. The starting material was manually sorted, cleaned, dried and ground with a roller mill to obtain samples of small particle size. Chemical activation was carried out using zinc chloride at various concentrations (3, 6 and 9 %, w/v) with an activation time of 24 hours. After the char was mixed with the chemical agent, the mixture was refluxed. The samples were carbonized in a furnace, heated from room temperature to 200-500 °C for heat pretreatment, and gasified with carbon dioxide at 600-800 °C for 1 hour. The chars obtained were cooled to room temperature and then sieved to obtain particles of 150-200 μm size, which served to prepare the activated carbon. The activated carbons obtained were thoroughly washed with 5% HCl and deionized water, dried at 110 °C, cooled to room temperature and stored in desiccators.

The iodine number of the prepared activated carbon was determined according to the ASTM D 4607-94 (1999) specification. The AC (bagasse) was then used to compare its efficiency with that of GAC.

The experiments were conducted using different activated carbons to test the long-term efficiency of GAC and AC (bagasse), using a synthetic wastewater with the composition shown in Table 1. This wastewater represents biologically treated sewage effluent [6]. Two-centimetre diameter columns were packed with GAC. The GAC used in the experiments was washed with distilled water and dried in an oven at 103.5 °C for 24 hours. It was kept in desiccators before packing into the columns. The physical properties of the GAC are shown in Table 2. The filters were backwashed at 40% bed expansion for approximately 3 minutes after every 24 hours of filtration run. Total organic carbon (TOC) was measured on a daily basis using a UV-persulphate TOC analyzer (Dohrmann, Phoenix 8000). Total attached biomass was measured as dry mass on a regular basis.
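The hydraulic conditions reported above (2 cm column diameter and 2 m/h filtration rate, with bed depths of 15-20 cm in the later experiments) fix the empty bed contact time (EBCT) and volumetric flow of the columns. A minimal sketch of that arithmetic, assuming the stated values; the helper functions are illustrative, not from the paper:

```python
import math

def ebct_minutes(bed_depth_m: float, filtration_rate_m_per_h: float) -> float:
    """EBCT = bed depth / superficial velocity, converted to minutes."""
    return bed_depth_m / filtration_rate_m_per_h * 60.0

def flow_l_per_h(diameter_m: float, filtration_rate_m_per_h: float) -> float:
    """Volumetric flow = cross-sectional area x superficial velocity (L/h)."""
    area_m2 = math.pi * (diameter_m / 2.0) ** 2
    return area_m2 * filtration_rate_m_per_h * 1000.0  # m3/h -> L/h

print(ebct_minutes(0.20, 2.0))               # 6.0 (minutes, for a 20 cm bed)
print(round(flow_l_per_h(0.02, 2.0), 3))     # 0.628 (L/h through a 2 cm column)
```

An EBCT of only a few minutes is typical for GAC biofilters and explains why daily TOC monitoring suffices to track performance.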

TABLE I. CONSTITUENTS OF THE SYNTHETIC WASTEWATER USED

Compound                  Concentration (mg/L)
Beef extract              1.8
Peptone                   2.7
Humic acid                4.2
Tannic acid               4.2
Sodium lignin sulfonate   2.4
Sodium lauryl sulphate    0.94
Acacia gum powder         4.7
Arabic acid               5.0
(NH4)2SO4                 7.1
K2HPO4                    7.0
NH4HCO3                   18.8
MgSO4.3H2O                0.71

TABLE II. PHYSICAL PROPERTIES OF GAC USED

Specification of the GAC        Estimated Value
Iodine number (mg/g)            800
Maximum ash content             5 %
Maximum moisture content        5 %
Bulk density (kg/m3)            748
BET surface area (m2/g)         1112
Nominal size (m)                3 x 10-4
Average pore diameter (Å)       26.14

III. RESULT AND DISCUSSION

The methods for preparing activated carbon are discussed in this paper, and the effects of different variables on the preparation of activated carbons are also explained. From the literature survey, the chemical recovery values decrease with carbonization time. This might be due to evaporation of ZnCl2 from the precursor at longer carbonization times. Therefore, it can be concluded that in chemical activation with ZnCl2, the impregnation method with one hour of carbonization produces activated carbons with a well-developed pore structure.

A. Carbonization temperature

The effect of carbonization temperature, gasification and chemical activation on the iodine number is shown in Table III. There were significant differences in the iodine number of the charcoal for the various zinc chloride activations over the carbonization range from 500 °C to 800 °C.

TABLE III. EFFECT OF CARBONIZATION TEMPERATURE, GASIFICATION AND CHEMICAL ACTIVATION ON THE IODINE NUMBER

Temperature (°C)   ZnCl2 (%w/v)   Iodine number
500                3              539
500                6              578
500                9              468
600                3              558
600                6              610
600                9              500
700                3              550
700                6              620
700                9              510
800                3              620
800                6              680
800                9              630
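Reading the iodine-number results programmatically makes the trend easy to check. The sketch below copies the values tabulated above and locates the activation condition with the highest iodine number; the data structure is ours, the numbers are from the table.

```python
# Iodine numbers from Table III, keyed by (temperature degC, ZnCl2 %w/v).
iodine = {
    (500, 3): 539, (500, 6): 578, (500, 9): 468,
    (600, 3): 558, (600, 6): 610, (600, 9): 500,
    (700, 3): 550, (700, 6): 620, (700, 9): 510,
    (800, 3): 620, (800, 6): 680, (800, 9): 630,
}

# Condition with the highest iodine number (a proxy for adsorption capacity).
best = max(iodine, key=iodine.get)
print(best, iodine[best])  # (800, 6) 680
```

At every temperature, 6 % w/v ZnCl2 gives the highest iodine number, with the overall maximum (680) at 800 °C.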

B. Comparison of GAC and other filter media in long term performance

Biofilter experiments were conducted with different filter media: GAC and AC (bagasse). GAC and AC were packed in 2 cm diameter columns to a bed depth of 20 cm. All the columns were operated under the same conditions with synthetic wastewater as the nutrient source. After 6 weeks of continuous operation, all columns reached a steady stage in terms of organic removal. The results are presented in Fig. 1, in which GAC was superior to the other filter media, with higher TOC removal efficiency. For example, while anthracite, plastic beads and sponge removed only about 20% of TOC, GAC achieved a consistent TOC removal of more than 60%.

Figure 1. Comparison of different filter media, GAC and AC (bagasse), in organics removal (filter bed depth 20 cm, average influent TOC 12 mg/L, flow rate 2 m/h)
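The removal efficiencies quoted above follow from the usual definition, removal (%) = (TOC_in - TOC_out) / TOC_in x 100. A brief sketch using the 12 mg/L influent TOC from the caption; the effluent values are hypothetical, chosen only to illustrate the roughly 20% versus more than 60% removals mentioned in the text:

```python
def toc_removal_percent(toc_in: float, toc_out: float) -> float:
    """Percentage of influent TOC removed across the filter."""
    return (toc_in - toc_out) / toc_in * 100.0

# Hypothetical effluent concentrations for a 12 mg/L influent.
print(round(toc_removal_percent(12.0, 9.6), 1))  # 20.0  (e.g. anthracite, sponge)
print(round(toc_removal_percent(12.0, 4.5), 1))  # 62.5  (e.g. GAC)
```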

C. TOC removal of GAC and AC (bagasse)

In this study, columns with the same bed depth of 15 cm were operated at a filtration rate of 2 m/h to compare TOC removal efficiency.

Figure 2. TOC removal of GAC and AC (bagasse) (the biofilters were acclimatized at 2 m/h for 42 days prior to this experiment; average TOC = 10 mg/L, column diameter = 2 cm, flow rate = 2 m/h)

D. Attached Biomass

The success of operation of a biofilter depends mostly on the activities of the microbial community in the filter. The surface characteristics of the filter media influence the attachment and growth of the biomass and protect the biomass from shear loss. To evaluate the growth of biomass on the surface of filter media, many methods have been used, depending on the available analysis facilities. Reference [7] used heterotrophic plate counts (HPC) to estimate biomass growth in the biofilter. Phospholipid analysis is also widely used to measure the growth of biomass in biofilters [8]. In this study, the total dry weight of the attached mass was measured. Although this method provides only rough information on the biomass, it is simple and more practical. Fig. 3 shows the growth of biomass on the GAC. It was quite stable after 4 weeks of continuous filtration operation.

Figure 3. Biomass accumulated on GAC (dry biomass, mg/g GAC, versus time in days)

The amount of biomass was highest (about 44 mg per g of GAC) after 45 days. The biomass concentration profile over time depends on both hydraulic and organic loading rates [9]. The higher the loading rate, the greater the initial biomass and the deeper its penetration into the filter bed. Fig. 4 shows the biomass accumulated on GAC over different periods.

Figure 4. Biomass accumulated on GAC at 20 days, 40 days and 60 days of operation time

IV. CONCLUSIONS

Activated carbon from bagasse waste can be used to remove organic pollutants from wastewater. The organic pollutant removal performance is nearly 50% after biofilm formation. This study shows that bagasse waste can substitute for GAC at lower cost and with high performance. In countries, such as several in Africa, that lack coconut but have abundant sugar cane, bagasse waste is an alternative source of activated carbon for purposes such as removing organic matter from biologically treated sewage effluent. The advantages of the AC biofilter are consistent TOC removal efficiency, long operational life, and simplicity of operation. The biomass in the AC biofilter remains at a consistent concentration over a long period, which keeps the biofilter working properly over a long time of operation. The performance of the AC biofilter can be improved by increasing the bed depth and decreasing the filtration rate. AC as a filter medium was superior to plastic beads, anthracite and sponge in organic removal.

ACKNOWLEDGMENT

The author would like to thank Suan Sunandha Rajabhat University for supporting this project.

[Figure 2 plot: % TOC removal versus time (days) for AC (bagasse) and GAC]


REFERENCES

[1] B. Eikebrokk, E. Gjessing, et al., "Why NOM Removal is Important," AWWA/AWQC Workshop, Berlin, 2001.

[2] R. M. Clark and B. K. Boutin, Eds., Controlling Disinfection By-Products and Microbial Contaminants in Drinking Water. Ohio: U.S. EPA, 2001.

[3] G. McKay, Ed., Use of Adsorbents for the Removal of Pollutants from Wastewater. Boca Raton: CRC Press, 1996.

[4] R. A. Minear and G. L. Amy, Eds., Disinfection By-Products in Water Treatment: The Chemistry of Their Formation and Control. Boca Raton: Lewis Publishers, 1996.

[5] W. T. Tsai, C. Y. Chang, M. C. Lin, et al., "Adsorption of acid dye onto activated carbons prepared from agricultural waste bagasse by ZnCl2 activation," 45, pp. 51-58, 2001.

[6] G. Y. Seo, Y. Suzuki, et al., "Biological Powdered Activated Carbon (BPAC) Microfiltration for Wastewater Reclamation and Reuse," Desalination, 106, pp. 39-45, 1996.

[7] R. Ahmad, A. Amirtharajah, A. Al-Shawwa, and P. M. Huck, "Effects of backwashing on biological filters," J. AWWA, 90, 62, 1998.

[8] J. W. Wang, R. C. Summers, and R. J. Miltner, "Biofiltration Performance: Part 1, Relationship to Biomass," J. AWWA, 87, 55, 1995.

[9] D. S. Chaudhary, S. Vigneswaran, H. H. Ngo, W. G. Shim, and H. Moon, "Granular activated carbon (GAC) biofilter for low strength wastewater treatment," Environmental Engineering Research, KSEE, vol. 8, no. 4, pp. 184-192, 2003.


Session 2: Sustainable Energy Technologies, Carbon and Emission

The integrated permitting system and environmental management: a cross analysis of the landfill sector in Mediterranean regions (Authors: Maria Rosa De Giacomo, Tiberio Daddi)

Carbon Dioxide Mitigation Strategies in Power Generation Sector: Singapore (Authors: Hassan Ali, Steven Weller)

Studies of isothermal swirling flows with different RANS models in unconfined burner (Authors: A.R. Norwazan, M.N. Mohd Jaafar)

Challenging Instruments and Capacities to Engage in Sustainable Development (Author: Carlos Germano Ferreira Costa)


The integrated permitting system and environmental management: a cross analysis of the landfill sector in Mediterranean regions

Maria Rosa De Giacomo
Institute of Management
Scuola Superiore Sant'Anna
Pisa, Italy
[email protected]

Abstract— Integrated permits address each aspect of a facility's operation that has environmental impact. Permitting industrial facilities is a key tool for regulating environmental pollution in many nations across the globe. In Europe, the integrated approach to environmental pollution is based on the Integrated Pollution Prevention and Control Directive, which has now been replaced by the Industrial Emissions Directive (IED) n. 75/2010. The implementation of the Directive can differ considerably among European regions because of the different Competent Authorities involved. The study aims to assess how the IPPC Directive impacts the management of landfill companies. It explores some requirements included in the integrated approach, with the aim of investigating how the implementation of the Directive in six European Regions can have a different impact on companies of the same sector. The results show that some differences in the implementation of the IPPC Directive are not always justifiable by the flexibility provided by the legislation. These differences lead to the Directive being implemented in different ways, thereby causing differences in the consequent prevention of pollution, the key principle of the Directive. More coordination among the different authorities could be a solution to overcome this problem.

Keywords-pollution prevention; IPPC Directive; integrated approach to pollution

I. INTRODUCTION

An integrated approach to permitting is more than just a consolidation or a "stapling together" of single-media permits. Integrated permits address each aspect of a facility's operation that has environmental impact. In most countries, permitting programs were first designed to separately address specific environmental elements or specific environmental concerns. Under this type of system, a major facility might be permitted, or otherwise regulated, under a variety of different controls, even by different regulators. However, an increasing number of governments, most notably in the European Union (EU), have been transforming their industrial pollution permitting regimes to foster a more integrated approach.

Tiberio Daddi
Institute of Management
Scuola Superiore Sant'Anna
Pisa, Italy
[email protected]

In Europe, this approach is based on the Integrated Pollution Prevention and Control (IPPC) Directive. It was first issued in 1996 [1] and was amended in 2008. It has now been replaced by the Industrial Emissions Directive (IED) n. 75/2010. The IPPC Directive represents one of the central policy tools of the European Union to regulate industrial activities and to achieve a higher level of protection of the environment as a whole. The Directive asks the Competent Authorities to issue a unique permit for industrial installations, in which limits, monitoring frequencies and operational requirements are set with reference to all environmental aspects.

Fundamentally, the IPPC permitting system is a comprehensive multi-media, pollution prevention approach to environmental protection that also promotes sustainable practices (e.g., consideration of consumption of water and raw materials, energy efficiency). The implementation of the IPPC system is based on a single standard-setting approach and on the evaluation of Best Available Techniques (BAT). In short, BAT is based on the most effective and advanced stage of techniques and their associated performance ranges. BAT is designed to achieve a high level of protection for the environment as a whole. In order to facilitate the determination of the BAT at each facility, several European Countries rely on a variety of cross-cutting tools that support standard-setting across all environmental media. A key concept of the IPPC Directive is the "flexibility principle." Through the permit issuance process, an IPPC permit writer fits plant-specific conditions (facility characteristics and local conditions) with sector-wide BAT indicated in the BAT Reference Documents (BREF). For example, BAT-based numeric limits (known as Emission Limit Values or ELVs and derived from sector benchmarks) may be adjusted in a permit to reflect local and site-specific conditions. This includes both BAT-based limits adjusted to reflect environmental quality standards or local geographic conditions (e.g., depletion of a local aquifer) and facility-specific characteristics and conditions (e.g., equipment and technology already in use at the facility). Using this approach, IPPC permitting is able to combine local and facility-specific conditions with sector-wide considerations.

The study is based on a research project financed by the European Commission. In the framework of this innovative European environmental regulation, this paper aims to assess how the IPPC Directive impacts the management of industrial companies. It describes the results of an empirical study carried out in the landfill sector, aiming to point out how the implementation of the Directive in six European Regions could have a different impact on companies of the same sector.


The following article is structured as follows: in the next section the existing IPPC-based studies are briefly introduced, then we define the research question and the method adopted. In section 3, we discuss the results of the research before closing the paper with discussion and conclusion sections.

A. Integrated permitting studies in the literature

There are many studies in the literature which evaluate the implementation of environmental policies and assess the effects of environmental regulation on the management of companies. However, few of them are concerned with the Integrated Pollution Prevention and Control (IPPC) Directive [2]. Papers dealing with the IPPC Directive mainly focus on Best Available Techniques (BAT) assessment and identification of the environmental performances of IPPC installations. Regarding BAT, some authors have proposed methods that could be adopted to identify and assess techniques. The authors of [3] describe a method to disseminate BAT in two industrial sectors (dairy and textile) in three Southern Mediterranean countries: Egypt, Morocco and Tunisia. The paper [4] presents an approach to the evaluation of BAT. Similar aims were pursued in the paper [5]. A methodology to define Emission Limit Values associated with Best Available Techniques for wastewater emissions was included in the paper [6]. The study [7] also includes a methodology that focuses on the evaluation of existing BAT techniques; the methodology is based on case studies in order to consider the local conditions of companies. The authors of [8] affirm, through the development of an operational decision tool, that firms should use not a single best available technique, but rather the best combination of several available techniques. The methods and tools to adopt a combination of BAT are also investigated in the paper [9]. Other studies deal with the implementation of BAT in a specific industrial sector included in the scope of the IPPC Directive. The study [10] focuses on the implementation of the IPPC Directive and the Reference Document on Best Available Techniques in a Turkish textile mill. The authors conclude that the implementation of Best Available Techniques is crucial to decreasing the consumption of water and energy. The authors of [11] assess the implementation of BAT in a seafood facility, while [12] analyse BAT in the adhesives application sector. In addition to the studies focused on BAT, there are some papers dealing with the capacity of the IPPC legislation to reduce pollution, thereby improving the environmental performance of companies. The effectiveness of the IPPC Directive in Ireland's pharmaceutical sector was demonstrated by the study [13]: the authors affirmed that the integrated licensing system made the avoidance of pollution possible in the industrial sector analysed. The study [14] describes the capacity of EMSs to improve performance in some IPPC sectors. The paper [15] considered case studies of British, Finnish and Swedish industries and their regulatory bodies, with the purpose of contributing to the discussion on the potential of the IPPC Directive as a driver of eco-efficiency in those firms. The effects of the IPPC Directive on the environmental performance of Finnish pulp and paper mills were investigated by [16]. The authors concluded that emissions decreased for some parameters, even if the performance of the sector did not show major changes during the considered period. The study [17] described a model to disseminate CSR among companies located in industrial clusters of IPPC sectors, such as tanneries and paper production. In the theoretical framework we can find several studies on BAT and environmental performance linked with the integrated permitting system adopted in Europe. Despite this, we can identify a gap in the literature regarding studies that analyse the requirements included in integrated permits and the impact of these requirements on the management of companies.

The aim of our paper is to fill this literature gap through a cross-European regional analysis focused on the landfill sector. Starting from the contents of the permits, we intend to investigate not only the main requirements, monitoring frequencies, and emission limit values imposed on these companies, but also whether the application of the IPPC Directive differs in the considered European Regions with regard to the imposition of different mandatory requirements. In order to analyse these differences, we assess if they are justifiable by the flexibility principle provided by the IPPC regulation.

II. RESEARCH QUESTION AND METHOD

This study shows the results of the analysis of the content of 61 IPPC permits issued for landfills receiving more than 10 tonnes per day or with a total capacity exceeding 25 000 tonnes, excluding landfills of inert waste. The permits were issued in six European Regions: Andalusia and Valencia (Spain), Tuscany, Piedmont and Sicily (Italy), and West Macedonia (Greece). The research aimed to identify differences in environmental management requirements, monitoring frequencies and emission limits imposed on installations of the same industrial sector, but with permits issued in the different European Regions investigated. Given that all the analysed Regions belong to the same European economic market, we can expect to find few differences among permits. These differences should be linked with local environmental characteristics, and in any case should not be so relevant as to impact the costs sustained by the analysed companies. So our hypothesis for testing is that, even if the IPPC Directive allows some flexibility to the Member States (MS) in its implementation, this would not create relevant disparities between the landfills located in the different EU Regions considered in this study.

The method applied in this study is Content Analysis, which studies in the literature define as a systematic, replicable technique for compressing parts of text into fewer content categories based on explicit rules of coding [18, 19, 20, 21].

The permits were collected and analysed during the years 2009-2011, as part of the MED IPPC NET (Network for strengthening and improving the implementation of the European IPPC Directive regarding Integrated Pollution Prevention and Control in the Mediterranean) project. MED IPPC NET was a 30-month project co-financed by the European Commission through the MED Programme. Permits were collected by the project partners, and the Competent Authorities that issued permits in each of the considered regions were involved. The Competent Authorities were also interviewed in order to investigate some aspects related to the issuing of IPPC permits. Permits were procured in electronic format. The collected sample is small (61 observations), but this reflects the very intensive commitment and effort required of the partners to collect the data. Indeed, even though article 15 of the IPPC Directive establishes public access to information regarding the implementation of the Directive (also taking into account the indications of the Aarhus convention), in some cases the project partners found it difficult to obtain free access to this kind of data. The most represented regions are Piedmont (34% of the total number of permits) and Tuscany (26%), followed by Andalusia (13%), Valencia (11%), Sicily (10%) and West Macedonia (5%). We collected data from six European Regions and the various Competent Authorities responsible for the issuance of integrated permits. The study population consists of 61 IPPC permits.

III. RESULTS

One of the aspects investigated in order to know how the IPPC Directive has been implemented refers to the requirements indicated in permits about Best Available Techniques. The BAT Reference Documents (BREFs) do not require the adoption of Emission Limit Values (ELVs) or specific techniques, but they are taken into account by the Competent Authorities in order to set ELVs. In this sense, the BAT is considered a tool for Competent Authorities to implement the IPPC law. The European Commission does not indicate through the IPPC Directive that these techniques are mandatory for companies in the scope of the IPPC.

Table 1 shows that in some cases, permits in the landfill sector include specific requirements about the adoption of BAT. In particular, the largest share of permits requires the adoption of BAT with a deadline for implementation (37.3% of cases). Another relevant share of permits includes a description of BAT but does not include specific prescriptions.

TABLE I. REQUIREMENTS ABOUT BEST AVAILABLE TECHNIQUES INCLUDED IN PERMITS

Best Available Techniques (BAT)                                        Landfills (5.4)
IPPC Permit does not include the adoption of BATs                      18.6%
IPPC Permit includes a description of BATs but does not include
specific requirements                                                  30.5%
IPPC Permit states that it has included BATs for environmental
purposes                                                               13.6%
IPPC Permit includes the adoption of BATs with a deadline to be
implemented                                                            37.3%
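The percentages in Table 1 are the kind of output the Content Analysis coding produces: each permit is assigned to one category, and category shares are computed over the coded sample. A sketch of that tabulation; the per-category counts below are hypothetical (chosen so that an assumed 59 coded landfill permits reproduce the published shares), not data from the project:

```python
from collections import Counter

# Hypothetical coding of landfill permits into the BAT categories of Table 1.
coded_permits = (["no BAT adoption"] * 11
                 + ["BAT described, no specific requirement"] * 18
                 + ["BAT stated as included"] * 8
                 + ["BAT adoption with deadline"] * 22)

counts = Counter(coded_permits)
for category, n in counts.items():
    share = 100.0 * n / len(coded_permits)
    print(f"{category}: {share:.1f}%")  # 18.6%, 30.5%, 13.6%, 37.3%
```

The explicit coding rules required by Content Analysis map directly onto the category labels here; changing a rule changes which bucket a permit falls into, which is why the rules must be fixed before coding begins.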

The data show that the approach regarding Best Available Techniques indicated by the European Commission was, in most cases, not followed. Table 1 demonstrates that one of the key tools of the European integrated permitting system (BAT) is not always used in the right manner by the Competent Authorities when issuing permits.

A. Landfill water emissions requirements

Our study also considers the Emission Limit Values related to water emissions (Table 2). For water emissions we refer to the run-off water of the landfill, the leachate being managed as liquid waste by the sample considered. We investigated the limits applied to water emissions discharged into surface water. The data related to discharges into sewers were not considered relevant for our aims, given the different limits often imposed by the local sewer management organizations.

TABLE II. EMISSION LIMIT VALUES FOR WATER EMISSIONS

Emission Limit Values related to waste water emissions for the landfill sector
Destination: surface water

Region            COD (mg/l)   TSS (mg/l)   Sulphate (mg/l)   Number of permits
Valencia          125          60           250               1
West Macedonia    125          25           250               3
Andalusia         n.a.         n.a.         n.a.              n.a.
Piedmont          n.a.         n.a.         n.a.              n.a.
Sicily            160          80           1000              6
Tuscany           160          80           1000              4

The most important difference found in the data relates to the sulphate limits: in West Macedonia and Valencia we observe a stricter limit than in Italy. Similar results were found for Chemical Oxygen Demand (COD) and Total Suspended Solids (TSS), where the Italian permits allow the discharge of wastewater with a higher pollutant load. For the regions of Andalusia and Piedmont, the data on ELVs for water discharge are not available.

In Table 3 we report the monitoring frequencies imposed in the permits for all parameters indicated in Table 2.

TABLE III. MONITORING FREQUENCIES OF WATER EMISSIONS

Monitoring frequencies of water emissions, landfills (5.4), with number of permits

Region            Frequency        Number of permits
Valencia          n.a.             n.a.
West Macedonia    Three-monthly    3
Andalusia         n.a.             n.a.
Piedmont          Yearly           5
Sicily            Three-monthly    6
Tuscany           Monthly          1
                  Three-monthly    5

The most common monitoring frequency is quarterly. There are differences among the Regions in the monitoring frequencies established: the highest frequency is monthly (Tuscany), while the lowest is yearly (Piedmont). Even within the same region (Tuscany), the frequency can differ among permits.

B. Monitoring frequencies of noise emissions

Another important aspect concerns noise emissions. In this case, Emission Limit Values are set neither in the national regulations nor in the IPPC permits; in general they are defined by local planning decided at the municipal level, in order to account for the urbanization of local contexts. Therefore, the most interesting data to analyse is the frequency imposed on the landfill to carry out new acoustic impact assessments at the perimeter of the plant, in order to measure the noise emitted by the process. The frequencies reported refer to ordinary impact assessments, meaning that the impact assessment must be carried out even in the absence of relevant modifications to the installations or to the productive process. The data from the permits are included in Table 4.

TABLE IV. MONITORING FREQUENCIES OF NOISE EMISSIONS

Monitoring frequencies of noise emissions (sector 5.4, landfills)

Andalusia         Frequency not established (100%)
Valencia          Five-yearly (100%)
West Macedonia    Frequency not established (100%)
Piedmont          Several times a year (15.4%); Yearly (53.8%); Two-yearly (7.7%); Three-yearly (23.1%)
Sicily            Not available
Tuscany           Yearly (6.3%); Two-yearly (25%); Three-yearly (12.5%); Frequency not established (56.3%)

The table shows that there are large differences in this aspect among the regions. For example, in Andalusia and West Macedonia the monitoring frequency of noise emissions is, in all cases, not established. Moreover, in the Italian regions (Tuscany and Piedmont) the periodicity varies strongly. For example, in Piedmont, in 15.4% of cases there is a monitoring frequency of several times a year, in more than 53% of cases the periodicity is yearly, and in the remaining cases it is two-yearly (7.7%) or three-yearly (23.1%). Similar to Piedmont, in Tuscany there is also high variability in the monitoring frequencies of noise emissions for landfill companies. Finally, in Valencia the monitoring frequency is always every five years. These data show large disparities not only among different European regions, but also within the same region.

Taking into account these results, we can affirm that the IPPC Directive impacts the management of industrial companies. Figure 1 shows how the IPPC Directive can influence company costs in different ways and, as a consequence, company competitiveness.

Figure 1. Illustration of how the IPPC Directive impacts the management of industrial companies.

IV. DISCUSSION

European Directives must be implemented by the Member States, and the Maastricht Treaty provides that the Member States shall choose the methods and actions for implementing them. In addition, the IPPC Directive introduces the flexibility principle, according to which Emission Limit Values and technical measures shall be based on best available techniques, without prescribing the use of any specific technique or technology, but considering the technical characteristics of the installation, its geographical location and the local environmental conditions. Under the Maastricht Treaty, Competent Authorities therefore have flexibility in defining the content of IPPC permits, although within this flexibility some key elements should be considered: the technical characteristics of the installation, its geographical location and the local environmental conditions.

Despite these opportunities for flexibility, many of the differences highlighted in this paper are, in the opinion of the authors, not justifiable. The ELVs for water discharges sometimes vary excessively among the Competent Authorities of different countries. The different levels of ELVs stimulate the adoption of BAT in different ways, causing differences in the consequent prevention of pollution, the key principle of the Directive. The monitoring frequencies of emissions, especially in the case of noise, also vary excessively, not only among different European regions but also within the same region. A monitoring periodicity that ranges from yearly to three-yearly within the same region cannot be justified by the flexibility principle of the Directive, particularly when, in many cases in the same region, the periodicity of emission monitoring is not established at all. Even if landfill companies differ among themselves, in the authors' opinion the differences among companies of the same industrial sector are not sufficient to justify such large disparities among the requirements imposed by IPPC permits.

These results allow us to conclude that the flexibility principle is not applied correctly. Moreover, some of the Competent Authorities interviewed stated that they do not know exactly how to apply the flexibility made available by the Directive, which supports our conclusion.

According to the authors, the current disparities in the implementation of the Directive can be linked to several causes. Firstly, the Directive seems to grant too much flexibility in the issuing of integrated permits. This results not only from the flexibility principle, but also from the fact that the indications on assessment frequencies and on the content of permits are too general. To overcome this problem, it would be advisable to allow a more coordinated implementation of the Directive among the Member States.

A second cause of these disparities can be found in the weak coordination between the Member States at the European level, and between the different regions at the national level. At the European level, the European Commission has carried out periodic monitoring of the implementation of the IPPC Directive, requesting information from the national authorities of each Member State. However, these reports aimed to monitor only the administrative aspects of the implementation of IPPC permits (e.g. the number of permits issued and the number of inspections carried out), not the content of the permits. If such monitoring were extended to the content as well, the European Commission would be able to adopt actions to improve the homogeneity of the permits. In addition, the differences observed at the regional level suggest that the coordination managed by the national ministries has not been very effective.

Finally, some other causes of the disparities in the permitting systems can be linked to the lack of resources, mainly human resources, in the local Competent Authorities in charge of issuing the permits. Establishing Emission Limit Values that take into account the "technical characteristics of the installation concerned, its geographical location and the local environmental conditions" requires greater knowledge and more time from the public officers involved in issuing the permits than the ordinary procedures do. Competent Authorities are not always able to invest in the training of their employees, and they do not always have the time available to identify different ELVs for each installation. For this reason, also for IPPC, the ordinary procedure is followed, which consists of including in the permits the ELVs identified in the national laws.

V. CONCLUSIONS

The results achieved by this paper contribute to the literature on pollution prevention. On the one hand, the paper contributes to bridging the gap in the literature regarding the content analysis of IPPC integrated permits: no previous studies on this topic have been published, and, as mentioned above, the institutional monitoring performed by the European Commission does not pursue the same aims. On the other hand, the paper confirms the incomplete implementation of the Directive. As already observed by several authors regarding the incomplete implementation of BAT [22, 10, 11, 23], our paper also confirms that other key principles of the Directive, such as the flexibility principle, could be better implemented.

The data shown in this paper reveal differences in IPPC permits across European regions linked to ELVs and monitoring frequencies. Large differences in the implementation of IPPC requirements could affect companies in different ways. In particular, compliance with legislation entails costs that companies have to sustain; if the requirements to comply with the Directive vary excessively among companies of the same sector, those companies sustain different costs, affecting landfill companies unevenly. The monitoring of emissions entails high costs and often involves external laboratories, so lower monitoring frequencies mean lower costs for landfills, giving some companies a competitive advantage.

Therefore, taking into account the research question and the results obtained, we can suppose that the issuing of integrated permits by different European regions can bring about relevant disparities in the environmental management of companies and in the associated compliance costs. To confirm this hypothesis, it would be necessary to carry out further studies on similar topics, collecting permits from other Member States and other European regions, or extending the research to other industrial sectors within the scope of IPPC.

We invite scholars to develop this research further. Some directions for further analysis concern the assessment of the impacts of the disparities pointed out: for example, future research could assess the economic impact of these differences and their effects on costs and competitiveness. Further research could also assess the future implementation of the IED. As mentioned above, the new Directive includes some new requirements concerning the identification of ELVs (e.g. Art. 15), and the effectiveness of these new requirements in eliminating disparities will have to be demonstrated.


REFERENCES

[1] European Commission (1996) Council Directive 96/61/EC of 24 September 1996 concerning integrated pollution prevention and control, Brussels, European Union.
[2] T. Daddi, M.R. De Giacomo, F. Testa, F. Iraldo and M. Frey, "The Effects of Integrated Pollution Prevention and Control (IPPC) Regulation on Company Management and Competitiveness", Business Strategy and the Environment, in press.
[3] T. Daddi, M.R. De Giacomo, G. Rodríguez Lepe, V.L. Vázquez Calvo, E. Dils and L. Goovaerts (2012) "A method to implement BAT (Best Available Techniques) in South Mediterranean countries: the experience of BAT4MED project", Environmental Economics, Vol. 3 No. 4, pp. 65–74.
[4] G. Giner-Santoja, P. Aragonés-Beltrán and J. Niclós-Ferragut (2012) "The application of the analytic network process to the assessment of best available techniques", Journal of Cleaner Production, Vol. 25 No. 1, pp. 86–95.
[5] J. Geldermann and O. Rentz (2004) "The reference installation approach for the techno-economic assessment of emission abatement options and the determination of BAT according to the IPPC-directive", Journal of Cleaner Production, Vol. 12 No. 4, pp. 389–402.
[6] C. Polders, L. Van den Abeele, A. Derden and D. Huybrechts (2012) "Methodology for determining emission levels associated with the best available techniques for industrial waste water", Journal of Cleaner Production, Vol. 29–30, pp. 113–121.
[7] A. Cikankowitz and V. Laforest (2012) "Using BAT performance as an evaluation method of techniques", Journal of Cleaner Production, doi: 10.1016/j.jclepro.2012.10.005.
[8] T. Bréchet and H. Tulkens (2009) "Beyond BAT: selecting optimal combinations of available techniques, with an example from the limestone industry", Journal of Environmental Management, Vol. 90 No. 5, pp. 1790–1801.
[9] G. Mavrotas, E. Georgopoulou, S. Mirasgedis, Y. Sarafidis, D. Lalas, V. Hontou and N. Gakis (2007) "An integrated approach for the selection of Best Available Techniques (BAT) for the industries in the greater Athens area using multi-objective combinatorial optimization", Energy Economics, Vol. 29 No. 4, pp. 953–973.
[10] A.M. Kocabas, H. Yukseler, F.B. Dilek and U. Yetis (2009) "Adoption of European Union's IPPC Directive to a textile mill: analysis of water and energy consumption", Journal of Environmental Management, Vol. 91 No. 1, pp. 102–113.
[11] M.C. Barros, A. Magán, S. Valiño, M.P. Bello, J.J. Casares and J.M. Blanco (2009) "Identification of Best Available Techniques in the seafood industry: case study", Journal of Cleaner Production, Vol. 17 No. 3, pp. 391–399.
[12] J. Geldermann, N.H. Peters, S. Nunge and O. Rentz (2004) "Best available techniques in the sector of adhesives application", International Journal of Adhesion and Adhesives, Vol. 24 No. 1, pp. 85–91.
[13] D. Styles, K. O'Brien and M. Jones (2009) "A quantitative integrated assessment of pollution prevention achieved by Integrated Pollution Prevention Control licensing", Environment International, Vol. 35 No. 8, pp. 1177–1187.
[14] T. Daddi, M. Magistrelli, M. Frey and F. Iraldo (2011) "Do Environmental Management Systems improve environmental performance? Empirical evidence from Italian companies", Environment, Development and Sustainability, Vol. 13 No. 5, pp. 845–862.
[15] N. Honkasalo, H. Rodhe and C. Dalhammar (2005) "Environmental permitting as a driver for eco-efficiency in the dairy industry: A closer look at the IPPC directive", Journal of Cleaner Production, Vol. 13 No. 10–11, pp. 1049–1060.
[16] K. Silvo, T. Jouttijärvi and M. Melanen (2009) "Implications of regulation based on the IPPC Directive – A review on the Finnish pulp and paper industry", Journal of Cleaner Production, Vol. 17 No. 8, pp. 713–723.
[17] M. Battaglia, L. Bianchi, M. Frey and F. Iraldo (2010) "An innovative model to promote CSR among SMEs operating in industrial clusters: evidence from an EU project", Corporate Social Responsibility and Environmental Management, Vol. 17, pp. 133–141.
[18] B. Berelson (1952) Content Analysis in Communication Research, Free Press, New York.
[19] GAO, U.S. General Accounting Office (1996) Content Analysis: A Methodology for Structuring and Analyzing Written Material, Washington, D.C.
[20] K. Krippendorff (1980) Content Analysis: An Introduction to Its Methodology, 4th ed., Sage, Newbury Park.
[21] R.P. Weber (1990) Basic Content Analysis, 2nd ed., Sage, Newbury Park, CA.
[22] M.C. Barros, P. Bello, E. Roca and J.J. Casares (2007) "Integrated pollution prevention and control for heavy ceramic industry in Galicia (NW Spain)", Journal of Hazardous Materials, Vol. 141 No. 3, pp. 680–692.
[23] C. Vazquez, G. Rodríguez, T. Daddi, M.R. De Giacomo, C. Polders and E. Dils (in press) "Policy challenges in transferring the integrated pollution prevention and control approach to southern Mediterranean countries: a case study", Journal of Cleaner Production. Available online at: http://dx.doi.org/10.1016/j.jclepro.2014.06


Carbon Dioxide Mitigation Strategies for the Singapore Power Generation Sector

Hassan Ali and Steven R. Weller

School of Electrical Engineering and Computer Science

The University of Newcastle

NSW 2308, Australia

e-mail: [email protected]

Abstract—This paper examines Singapore's ongoing efforts and strategies to mitigate carbon dioxide (CO2) emissions from the power generation sector in order to achieve its 2020 emission reduction targets. Our study reveals that important gaps remain, especially if the aim is to achieve energy security in a future-oriented way while generating sustainable, reliable and competitively-priced electricity. The paper outlines options by which Singapore can employ energy sources to reduce CO2 emissions from the power generation sector beyond 2020.

Keywords-carbon dioxide; CO2 emissions; greenhouse gas (GHG); power generation

I. INTRODUCTION

Since the beginning of the Industrial Revolution in c.1750, greenhouse gas (GHG) concentrations in the atmosphere have significantly increased due to the combustion of fossil fuels and contemporaneous, near-exponential growth of the global economy. Greater concentrations of GHGs in the atmosphere enhance the greenhouse effect, trapping more heat within the various components of the Earth system, such as the ocean, atmosphere and cryosphere. The resulting anthropogenic increase in Earth’s temperature is altering cloud cover and wind patterns, ocean currents and distributions of plants, and driving the observed increase in global mean sea level.

According to the 5th Assessment Report of the United Nations Intergovernmental Panel on Climate Change (IPCC), the global mean sea level is projected to rise by 10–22 inches by 2100 under a low emissions scenario and by 20–39 inches under a high emissions scenario [1]. Trends in local weather records are consistent with these global observations of climate change. In Singapore, for example, the annual mean surface temperature shows a general warming trend, having risen from 26.8°C in 1948 to 27.6°C in 2011, and the mean sea level has increased by 3 mm per year over the last 15 to 17 years [2]. In recent years a shift in rainfall patterns has been observed, and rainfall has also become more intense. In 2001, tropical storm Vamei swept just north of Singapore, causing major floods in the region. This change in climate is already having impacts in areas such as land loss, coastal erosion, seawater intrusion and flooding, and is affecting public health through the resurgence of diseases.

Over the years, Singapore has adopted positive steps, policies and strategies to address the effects of global climate change. In May 1997, Singapore endorsed the United Nations Framework Convention on Climate Change (UNFCCC) to support worldwide efforts and to demonstrate its commitment to mitigating and controlling GHG emissions, and it acceded to the UNFCCC's Kyoto Protocol (KP) in 2006 [3]. As a non-Annex-I Party to the UNFCCC, Singapore is not subject to binding GHG emission reduction commitments under the Kyoto Protocol.

Ahead of the UNFCCC climate change conference in Copenhagen, Singapore pledged to unilaterally reduce its emissions to 7–11% below Business-as-Usual (BAU) levels in 2020 and, in the case of a binding international agreement in which all countries implement their commitments in good faith, to 16% below BAU levels in 2020 [2]. Although a legally binding agreement has yet to be reached, Singapore has embarked on mitigation and energy efficiency measures in key sectors, namely power generation, industry, buildings and households, to achieve the 7–11% reduction target for 2020. The power generation sector alone is expected to contribute close to half of the emission reductions [2] and is therefore the main target for carbon dioxide (CO2) reductions to achieve the 2020 targets.

Motivated by the above facts, this paper provides an overview of Singapore's efforts and strategies to contain CO2 emissions from the power generation sector in order to achieve its 2020 emission reduction targets. Meeting the long-term goal of mitigating CO2 emissions will require large additional reductions. The paper therefore suggests a range of CO2 mitigation strategies for the power generation sector beyond 2020, with a view to long-term energy security, sustainability and cost competitiveness.

II. CARBON EMISSIONS AND INTENSITY

Singapore's historical absolute CO2 emissions and CO2 intensity trends are shown in Fig. 1 [4]. Singapore's rate of emissions growth was about 6.4% per year from 1994 to 2000. From 2000 to 2005, emissions grew by 1.1% per year (from 39 million tonnes in 2000 to 41 million tonnes in 2005), due largely to fuel switching to natural gas in the power generation sector. By 2006, Singapore's carbon intensity had reached 30% below 1990 levels, due to the rapid switch to natural gas for power generation and improvements in energy efficiency. In 2007, the proportion of electricity generated from gas using highly efficient combined cycle turbines in Singapore was already 79%.

Figure 1. Singapore's absolute CO2 emissions and CO2 intensity [4].

Singapore's absolute carbon emissions in 2005 were 41 million tonnes (MT) and, assuming roughly 5% annual growth, are expected to reach 77.2 MT in 2020 under a BAU scenario [2].
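The BAU projection can be sanity-checked with a compound-growth calculation. The sketch below uses only the two endpoint figures quoted above and shows that the 41 MT to 77.2 MT trajectory corresponds to an implied growth rate of roughly 4.3% per year, broadly consistent with the approximate 5% figure cited:

```python
# Implied compound annual growth rate behind the BAU projection [2]:
# 41 Mt CO2 in 2005 growing to a projected 77.2 Mt in 2020.
e_2005, e_2020, years = 41.0, 77.2, 2020 - 2005

cagr = (e_2020 / e_2005) ** (1 / years) - 1
print(f"Implied annual growth: {100 * cagr:.1f}%")  # about 4.3% per year

# The same rate projected forward recovers the 2020 figure.
projected_2020 = e_2005 * (1 + cagr) ** years
print(f"Projected 2020 emissions: {projected_2020:.1f} Mt")
```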

III. POWER GENERATION OUTLOOK

Since 1995, Singapore's peak electricity demand has increased by almost 42 per cent, rising from 3,485 MW in 1995 to 6,041 MW in 2009 [5]. In 2009 demand dropped below the levels of preceding years due to the economic downturn, with a sharp rebound in 2010. Peak electricity demand then increased by almost 13%, from 5,981 MW in January 2010 to 6,859 MW in July 2014 [6].

Singapore's monthly installed capacity from August 2013 to July 2014 is shown in Fig. 2. Installed capacity increased by 16.45%, from 10,657 MW in January 2009 to 12,756.2 MW in July 2014.

Figure 2. Monthly installed capacity [6].

The electricity generation sector produced a total of 4,034.0 ktoe (or 46.9 TWh) of electricity in 2012, 2.0% higher than the 3,954.8 ktoe (or 46.0 TWh) generated in 2011 [5]. Singapore's final generated electricity is projected to increase by 27%, from 37 TWh in 2010 to 51 TWh by 2035 [7].
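The ktoe and TWh figures quoted above can be cross-checked with the standard conversion factor (1 tonne of oil equivalent = 11.63 MWh, a widely used convention rather than a figure from this paper):

```python
# Cross-check the ktoe <-> TWh generation figures quoted from [5].
# 1 tonne of oil equivalent = 11.63 MWh, so 1 ktoe = 0.01163 TWh.
KTOE_TO_TWH = 11.63e-3

gen_2012_ktoe = 4034.0
gen_2011_ktoe = 3954.8

gen_2012_twh = gen_2012_ktoe * KTOE_TO_TWH  # ~46.9 TWh
gen_2011_twh = gen_2011_ktoe * KTOE_TO_TWH  # ~46.0 TWh
growth = gen_2012_ktoe / gen_2011_ktoe - 1  # ~2.0% year-on-year

print(f"{gen_2012_twh:.1f} TWh vs {gen_2011_twh:.1f} TWh (+{100 * growth:.1f}%)")
```

Both conversions and the 2.0% year-on-year growth agree with the figures in the text.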

Fuel oil was long the dominant primary source of electrical power generation in Singapore, but the use of natural gas for generating electricity increased rapidly, from 28% in 2001 to 81% in 2009 [6]. Electricity generated from diesel, syngas and refuse incineration remained at around 4%. The share of natural gas in Singapore's fuel mix for electricity generation rose to 84.3% in 2012, up from the 78.0% registered in 2011 [8]. This was accompanied by a fall in petroleum products' contribution to the fuel mix to 12.3%, with other energy products constituting the remaining 3.4% of fuel consumed.

Figure 3. Electricity generation fuel mix 2001 and 2009 [6].

According to 2013 Energy Market Authority (EMA) statistics [6], the share of natural gas in Singapore's fuel mix has now risen to 91.79%, with petroleum products and other energy products (solar and waste) contributing 4.29% and 3.9%, respectively.

Figure 4. Electricity generation fuel mix 2009-2013 [6].

Natural gas is imported into Singapore from Malaysia and Indonesia via four offshore pipelines. Singapore also imports liquefied natural gas (LNG) from various countries to meet rising demand and to diversify its sources of natural gas.

IV. CO2 MITIGATION STRATEGIES

Singapore is pursuing the target of reducing CO2 emissions by 7–11% by 2020 and has adopted the following CO2 mitigation strategies in the power generation sector.

A. No Dependency on Coal

Due to the larger quantities of pollution generated by coal-fired generation, Singapore did not adopt coal-fired power generation in its early years of economic development [9].

B. Liberalization of Power Sector Market

Singapore's electricity market has been restructured and liberalized. As a consequence, Singapore's retail and wholesale electricity markets are open to competition, with the aim of giving users the ability to choose their power supplier. The country has thus benefited from competitive electricity prices and improved efficiency, and it has avoided subsidising electricity use. By adopting this market-based approach, the government encourages households and businesses to use electricity prudently and thus helps reduce CO2 emissions.

C. Increase in Efficiency and Natural Gas in the Fuel Mix

The competitiveness of combined cycle gas turbine (CCGT) technology has improved rapidly during the last decade. Capital costs have decreased to a level at which few other technologies can compete, and fuel efficiencies approach 50–60%.

In Singapore's electricity industry, market-based competition acts as a natural incentive for power generation companies to be energy efficient. As a result, the proportion of electricity generated using highly efficient combined cycle turbines fired by natural gas rather than fuel oil grew from 19% in 2000 to 79% in 2007, and the gross efficiency of power generation in Singapore increased from 39% in 2001 to 44% in 2006 [4]. Among fossil fuel-fired power plants, natural gas produces the least carbon emissions per unit of electricity generated. The country has thus substantially reduced its emissions growth by switching to natural gas and adopting more efficient CCGT plants.
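The link between gross efficiency and emissions can be made concrete: CO2 per unit of electricity is the fuel's combustion factor divided by the conversion efficiency. The sketch below assumes a natural-gas emission factor of about 56.1 kg CO2 per GJ of fuel (a standard IPCC default, not a number taken from this paper):

```python
# Sketch: how gross efficiency drives the CO2 intensity of gas-fired power.
EF_GAS_KG_PER_GJ = 56.1  # assumed natural-gas combustion factor (IPCC default)
GJ_PER_MWH = 3.6         # exact unit conversion: 1 MWh = 3.6 GJ

def co2_intensity(gross_efficiency: float) -> float:
    """kg CO2 per MWh of electricity at a given gross efficiency."""
    fuel_gj_per_mwh = GJ_PER_MWH / gross_efficiency  # fuel energy needed
    return EF_GAS_KG_PER_GJ * fuel_gj_per_mwh

# Singapore's gross efficiency in 2001 vs 2006 [4]:
for eff in (0.39, 0.44):
    print(f"{eff:.0%} efficiency: {co2_intensity(eff):.0f} kg CO2/MWh")
```

Under this assumed emission factor, the efficiency gain from 39% to 44% alone cuts the CO2 intensity of each gas-fired MWh by roughly 11%.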

D. Increasing the Use of Industrial Cogeneration and Trigeneration

Cogeneration is the combined generation of electricity and heat or process steam, while trigeneration is the combined generation of electricity, heat and chilled water for air conditioning or refrigeration. Recent years have seen an increasing number of these plants in Singapore. The efficiency of a cogeneration plant normally ranges from 70 to 85%, whereas trigeneration can yield more than 90% energy efficiency, so the CO2 emissions per unit of power generated are greatly reduced. In view of the high energy efficiency and CO2 reduction potential offered by these technologies, Singapore continues to promote cogeneration and trigeneration at industrial facilities.

E. Solar Photovoltaic (PV) Systems

The adoption of solar PV systems in Singapore has increased sharply over the past few years, with the total installed capacity of grid-connected solar PV systems growing from 362 kWp at the end of 2008 to 9,989 kWp in 2012 [10]. While the use of solar power is small today, Singapore plans to increase the adoption of solar power by government agencies to 350 MWp in 2020, about 5% of annual electricity demand.

Singapore is actively investing in R&D and test-bed development to improve the efficiency and lower the price of solar technologies for adoption on a larger scale. To facilitate this, the Economic Development Board (EDB) has launched solar capability building schemes such as the Solar Capability Scheme (SCS) and Clean Energy Research and Test-bedding (CERT) [11]. Under the SCS, the EDB provides funding for new private buildings to install solar technologies, whereas under CERT the Housing and Development Board (HDB) is conducting solar test beds in 30 HDB precincts over a five-year period (2008–2015).

Figure 5. Installed capacity of grid-connected solar PV systems [8].

F. Waste-to-Energy (WTE)

Waste-to-energy plants produce fewer carbon dioxide emissions than coal-fired steam production. Since 2000, Singapore's waste-to-energy plants have contributed about 2–3% of its energy supply, and there are currently five such plants. Singapore currently recycles 60% of its waste, with the rest incinerated or landfilled, and aims to recycle 70% of its waste by 2030.

G. Biomass and Biogas

Biomass and biogas plants are important renewable energy sources. Biomass plants rely on gasification, a process that converts a solid fuel such as biomass into a clean gaseous fuel that can be burned in a gas engine to generate electricity. A biogas plant converts animal manure, green plants, and waste from agro-industry and slaughterhouses into a combustible gas that is then used for power generation. There are currently seven biomass and biogas plants in Singapore, with installed capacities in operation of 1.9 MW and 7 MW, respectively.

V. VULNERABILITIES IN SINGAPORE'S CO2 MITIGATION STRATEGIES

Although the deployment of solar power is growing, its projected 5% share of the 2020 emission reduction target is modest. Despite being alternative-energy disadvantaged, Singapore has not taken the opportunity to set more ambitious and aggressive renewable energy targets.

Under BAU projections for 2020, Singapore's fuel mix is estimated to be around 70–75% natural gas, with the rest primarily based on fuel oil [2]. The future dominance of gas is clearly reflected in the fact that natural gas already accounts for almost 92% of Singapore's fuel mix for power generation today. To further increase the share of natural gas in the fuel mix and to ensure a resilient and more diverse supply, an LNG terminal commenced operations in 2013 [2]. Nevertheless, there are limits to how much more Singapore can reduce emissions by boosting gas-based power generation and diversifying its sources of gas.

Due to rapid economic growth, Asia is anticipated to become a major destination for LNG imports. The bulk of this growth in gas demand will be led by "Emerging Asian" nations including China and India, which will account for about 35% of incremental global gas demand growth through to 2020 [12]. Competition for imported gas resources is significant, and there is therefore great uncertainty in future gas prices, which are volatile on both daily and multi-year scales due to increased demand. Indonesia is already aiming to divert gas exports currently destined for Singapore to local industries in Java. Singapore's high level of gas dependency therefore leaves it vulnerable to price fluctuations and supply disruptions.

Net emissions from gas-powered generation are determined not only by emissions arising from combustion, but also by leaked natural gas and by the CO2 emitted in transporting it as LNG (wherein up to one third of the energy is consumed in transport). Leakage rates of just 3% for natural gas will bring it into approximate parity with coal-fired electricity in terms of global warming effect [13]. Singapore's strong shift to natural gas would therefore not reduce emissions enough to address anthropogenic climate change. Although increasing the efficiency of energy conversion, running plants at high capacity factors, making operational improvements and upgrading equipment can help cut power sector emissions further, these reductions alone may not be sufficient to achieve atmospheric CO2 stabilization. Since future electricity demand is set to rise, the carbon footprint of each kWh of electricity needs to be reduced further at low cost. Efforts should therefore be directed at achieving greater cuts at a lower cost, rather than focusing solely on efficiency improvements to meet CO2 reduction targets.
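The leakage argument can be sketched numerically. All figures below are illustrative assumptions chosen for the sketch (typical CCGT and coal combustion intensities, methane fuel use at roughly 50% plant efficiency, and a 20-year methane GWP of about 86); none are taken from this paper or from [13]:

```python
# Rough sketch of the leakage argument: methane that escapes upstream adds
# CO2-equivalent emissions on top of the CO2 from combustion.
GAS_COMBUSTION = 490.0  # kg CO2/MWh, assumed typical CCGT combustion intensity
COAL_INTENSITY = 900.0  # kg CO2/MWh, assumed typical coal plant
CH4_PER_MWH = 130.0     # kg methane burned per MWh, assumed ~50% efficiency
GWP_CH4 = 86.0          # assumed 20-year global warming potential of methane

def gas_effective_intensity(leak_rate: float) -> float:
    """kg CO2e per MWh of gas-fired electricity, including upstream leakage."""
    # If a fraction `leak_rate` of produced gas escapes, the methane leaked
    # per unit burned is leak_rate / (1 - leak_rate) of the fuel consumed.
    leaked_ch4 = CH4_PER_MWH * leak_rate / (1 - leak_rate)
    return GAS_COMBUSTION + leaked_ch4 * GWP_CH4

for leak in (0.0, 0.01, 0.03):
    print(f"{leak:.0%} leakage: {gas_effective_intensity(leak):.0f} kg CO2e/MWh")
```

Under these assumptions, a 3% leakage rate pushes gas-fired electricity well above 800 kg CO2e/MWh, approaching the assumed coal figure, which is the qualitative point made in [13].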

VI. OPTIONS FOR REDUCING CO2 EMISSIONS BEYOND 2020

Beyond 2020, Singapore will need stronger interventions in the power generation sector on account of security, cost effectiveness and environmental concerns. To that end, Singapore could use the following options to reduce CO2 emissions from its power generation sector.

A. Cap on Natural Gas in the Fuel Mix

Looking forward, Singapore's energy mix will need to become increasingly diverse, as no single source of power generation can provide a silver bullet. Increasing reliance on gas has discouraged the use of carbon-free renewables, nuclear power and coal with carbon capture and storage (CCS). The gas-dominated strategy has locked Singapore into decades of further gas usage even though a number of other options are available to achieve CO2 reductions. In the post-2020 scenario, Singapore must cap the share of gas in the fuel mix for power generation in order to have a balanced energy policy comprising a mix of nuclear power, fossil fuels with CCS and a major roll-out of renewables.

B. Increasing Renewable Energy Share

To curb CO2 emissions and cater for peak load demand, renewable energy holds great promise and is expected to increase its share of Singapore's fuel mix for power generation. According to a Sustainable Energy Association of Singapore (SEAS) white paper [14], less than 1 percent of the state's electricity currently comes from renewable energy sources, primarily biomass, biogas and small-scale solar, but Singapore has the opportunity to add medium- to large-scale coastal and offshore solar and wind developments to its renewable energy portfolio, supported by strategies such as financial incentives, rebates, tax credits and the establishment of renewable energy standards.

C. Solar Energy

Singapore's high average annual solar irradiation of about 1,500 sun hours per year makes solar energy a promising renewable energy option. Solar power currently provides less than 0.1% of Singapore's electricity needs, and the state has mainly concentrated its efforts on small utility-scale distributed solar installations that can be located on rooftops, on lakes or on the ground.

Due to the shading and space limitations that constrain large-scale inland deployment of solar panels, offshore solar is likely to play a role in the future energy mix. Densely populated Singapore should therefore start looking seaward, towards offshore solar power generation on its reclaimed land near coastal bays and areas. Sustained investment in R&D into more advanced, less costly and more efficient offshore solar generation technologies will therefore be necessary.

D. Wind Energy

Singapore has low average wind speeds of 2–3 m/s, although wind speeds of up to 8 m/s can be obtained at elevated and strategically located sites [15]. Most commercial wind turbines are not efficient at Singapore's low wind speeds, so the use of inland wind power for electricity production in Singapore is thought to be limited. Nevertheless, it is possible to optimize the turbine and generator for lower wind speed operation and achieve a significantly higher power output than existing commercial turbines at lower wind speeds. For example, Vestas recently unveiled the V112, a 3 MW commercial low-wind turbine with a cut-in wind speed of 3 m/s.
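The cube-law scaling of wind power underlies why Singapore's low average wind speeds matter so much: the kinetic power carried by the wind per unit rotor area is P/A = ½ρv³. A minimal sketch of this arithmetic, using the wind speeds discussed above and a standard sea-level air density (the density value is an assumption, not a figure from this paper):

```python
# Wind power density scales with the cube of wind speed: P/A = 0.5 * rho * v^3.
RHO_AIR = 1.225  # kg/m^3, standard sea-level air density (assumed here)

def power_density(v_ms: float) -> float:
    """Kinetic power per unit rotor area in W/m^2 for wind speed v_ms in m/s."""
    return 0.5 * RHO_AIR * v_ms ** 3

low, high = power_density(2.5), power_density(8.0)
print(f"2.5 m/s -> {low:.1f} W/m^2, 8.0 m/s -> {high:.1f} W/m^2")
print(f"ratio: {high / low:.1f}x")  # roughly 33x more power at 8 m/s
```

This cubic scaling is why turbines optimized for low cut-in speeds, rather than standard commercial machines, are the relevant technology at Singapore's typical wind speeds.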

The potential growth area for Singapore will be in micro (50 W to 2 kW) and small (2–40 kW) wind turbines for urban environments. These turbines could also have applications along major roadways, on rooftops, in train tunnels, at cooling-tower and ventilation-system outlets, and on communication towers. Small wind machines can be connected to small, localized micro-grid systems and used in conjunction with diesel generating sets and/or solar PV systems on remote islands of Singapore that are not grid connected.

Our analysis of wind speed data for potentially windier sites near coastal regions at Changi showed a mean annual wind speed of 3.16 m/s at an elevation of 5 m [16]. Wind speeds may exceed 9 m/s at a height of 50 m quite close to the shore. This makes small-scale wind-powered electricity generation an attractive option near coastal areas. Studies need to be initiated for windier regions at modest depths closer to the shore, together with the necessary adaptations of wind turbine technology.

Offshore wind speeds tend to be higher than coastal wind speeds. Offshore wind is more capital intensive than onshore wind,

Copyright © 2014 WCST-2014 Technical Co-Sponsored by IEEE UK/RI Computer Chapter 46


but its key attraction is the reduced environmental constraints. Offshore winds are also less turbulent than onshore winds, and the wind gradient is smaller. As a result, offshore wind power on a medium to large scale can make a real contribution to Singapore's carbon reduction targets and renewable energy targets. Early focus should therefore be on utility-scale floating wind farms at greater heights to capture high wind speeds. Precise wind speed measurements are also needed to establish feasibility and generation costs, leading to the siting of offshore wind farms.

E. Biomass Co-firing

Co-firing biomass with coal has become a recognized option in electricity generation markets. At present, biomass co-firing in modern coal power plants with efficiencies of up to 45% is the most cost-effective use of biomass for power generation. As the share of coal in the power generation mix is likely to increase, co-firing is expected to entirely replace dedicated biomass-fuelled power generation in the near term.

F. Coal Fired Power Generation

Coal is expected to remain a relatively cheap and secure option for Singapore because of the geographically widely distributed reserves in Indonesia, Vietnam, Australia, Malaysia, Brunei, and the Philippines. These reserves are located in areas that are less politically sensitive. Coal combustion technologies are also developing rapidly, and modern plants have attained fuel-to-electricity efficiencies of 45% [17].

In 2011, EDB called for a tender to conduct a feasibility study evaluating the benefits and costs of a coal gasification plant on Jurong Island, including different scenarios for providing power and steam, for lowering the carbon footprint, and for being carbon-capture ready. To diversify the fuel mix for energy security and price stability, Singapore's first utility plant to burn coal was also officially opened in 2013. This recent trend towards coal power generation indicates that the share of coal in the fuel mix will grow significantly beyond 2020, and thus there is an increasing need for new approaches and technologies beyond direct combustion. New technologies like integrated gasification combined cycle (IGCC) plants, which turn coal into synthetic gas, combined with CCS can potentially alleviate the problems of coal power generation by reducing its environmental impact. In the post-2020 scenario, gas will likely remain a major source of power generation; however, IGCC coal projects are anticipated to move forward with the utilisation of CCS technologies, with current capture rates of up to 90%.

G. Nuclear Power

Nuclear power is a stable source of power; it operates inexpensively and can guarantee large emission cuts as it is virtually carbon-free. Many countries are expanding their fleets of nuclear power plants in response to carbon and security imperatives, and there has also been increased regional interest in nuclear energy programmes. China recently approved 28 new nuclear plants. Thailand plans to build up to five nuclear power plants by 2025, and Vietnam has already announced plans to build two nuclear power plants in the next decade. Malaysia is also interested in going nuclear by 2020, and Indonesia is considering this option. Since Singapore is without any indigenous energy resources, nuclear energy is very important for maintaining its industrial base load with energy security, environmental sustainability, and economic competitiveness. The main barriers to its adoption include long lead times, the lack of a trained workforce, and the current lack of public support.

In 2012, a nuclear pre-feasibility study conducted by the Singapore government concluded that while current nuclear energy technologies are not yet suitable for Singapore, the country should continue to take part in global and regional talks on nuclear safety. In April 2014, Singapore started building up its expertise and capabilities in nuclear safety, science and engineering with a $63 million, five-year research and education programme. It is thus anticipated that Singapore will likely consider nuclear energy production as a way to mitigate CO2 emissions cost-effectively, with energy security, in the post-2020 scenario.

H. Carbon Capture and Storage (CCS)

The IEA suggests that 160 GW of coal CCS may need to be installed globally by 2030 as part of action to limit GHG concentrations to 550 ppm CO2-eq, with a further 190 GW of CCS capacity required if a 450 ppm CO2-eq target is to be achieved [18].

CCS is not new, but nor is it yet a fully mature technology. Significant, progressive improvements in CO2 capture capability and reductions in the energy penalty of capture can be foreseen for the early 2020s [19]. Most notably, if the costs of carbon sequestration fall as expected, coal can be considered a sustainable energy source through the deployment of IGCC-CCS plants.

Singapore is well endowed with carbon dioxide storage sites near major carbon dioxide sources. The country should therefore direct its efforts towards the deployment of 2nd-generation CCS-ready power plants (with a possible 30% reduction of energy loss) for fossil-fuelled power generation to help mitigate climate change in the post-2020 scenario.

Since the global rollout of proven CCS technologies is not expected to commence until 2020, there is a clear need to verify and validate the performance of capture plants for the differing CO2 conditions of coal-fired power stations and natural gas turbines. It is therefore extremely important for Singapore to demonstrate and deploy this technology early by retrofitting CO2 capture to the existing gas-fired and waste-to-energy (WTE) fleet of power plants to verify, qualify, and facilitate CCS prior to its projected global rollout.

To move CCS forward in Singapore, it has been identified as one of five roadmap areas under the Singapore government's investment of $100 million into research on efficient energy solutions. Increased, sustained investment in R&D into more advanced, less costly second- and third-generation capture technologies will also be necessary.



I. ASEAN Power Grid and Import of Electricity

Singapore aims to connect to energy markets in the region through initiatives like the ASEAN Power Grid (APG) and the Trans-ASEAN Gas Pipeline (TAGP). To diversify its energy resources, Singapore is also planning to import cost-competitive electricity from neighbouring countries. Indonesia has already identified the Batam, Pemping and Kepala Jeri islands for the export of coal-generated electricity to Singapore. These options will free up valuable land in Singapore for inland solar and wind energy, and could also allow Singapore to tap the significant renewable energy potential in the region, such as large-scale solar, hydroelectric potential and future nuclear potential.

J. Carbon Tax

To limit carbon emissions, a carbon tax is expected to be adopted in Singapore in the long term. Such a tax would heavily penalize coal-fired generation without CCS and could spur greater use of clean coal technologies like CCS and cleaner energy sources like nuclear, solar and wind. The carbon tax revenue can be recycled to fund and subsidize renewables and CCS.

VII. CONCLUSION

A number of CO2 mitigation measures in Singapore's electricity generation sector to achieve the 2020 emission reduction targets have been identified. The focus of Singapore's power growth has switched from oil to gas in search of low CO2 emissions with high efficiency and low capital cost. Our study shows that the 2020 abatement targets are easier to meet owing to the gas-dominated strategy of CO2 reduction and the nominal renewable energy component of the 2020 emission reduction targets.

The trend towards efficient gas-fired power generation instead of cheaper coal for reducing CO2 intensity makes the generating system vulnerable to price volatility. This strengthens the case for a cap on gas-based generation, diversification of the fuel mix for power generation, a carbon tax, and the import of electricity in the post-2020 scenario. Under fuel mix diversity, an increasing share of coal-based CCS technology, renewables, and nuclear are suggested as potential options for power generation. A carbon tax on fossil-fuel-based power generation would help promote the diffusion of clean coal CCS technologies and renewable energy.

Import of electricity and the ASEAN grid are promising ways to deliver the cuts in carbon emissions needed to tackle key sources of anthropogenic climate change. Such options could also play a significant role in enabling Singapore to develop its own renewable energy power generation capacity. Again, any such imports should be subject to safeguards to maintain reliability and environmental standards.

Other post-2020 CO2 reduction strategies with limited potential include energy efficiency efforts in WTE, CHP and CHCP, retrofitting gas power plants for CCS, and co-firing waste. Similarly, sustained investment in R&D into low-carbon technologies and renewables will be necessary to avoid sub-optimal solutions.

From the current nuclear capacity building and R&D initiatives, we anticipate that in the long term Singapore will likely consider nuclear energy production as a way to mitigate the bulk of its CO2 emissions and achieve a long-term cost-effective energy source to cater for its base load.

REFERENCES

[1] IPCC 5th Assessment Report, Climate Change 2013: The Physical Science Basis, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

[2] National Climate Change Strategy (NCCS) 2012, Climate Change & Singapore: Challenges, Opportunities, Partnerships at: http://app.nccs.gov.sg/data/resources/docs/Documents/NCCS-2012.pdf

[3] “Crucial Issues in Climate Change and the Kyoto Protocol: Asia and the World”, K-Lian Koh, L. H. Lye, J. Lin, World Scientific Publishing Company 2009.

[4] Ministry of Environment and Water Resources, Singapore. (2008, March). Singapore’s National Climate Change Strategy (NCCS) at: http://www.elaw.org/system/files/Singapore_Full_Version.pdf

[5] EMA, Statement of Opportunities for Singapore Energy Industry 2010, at:http://www.ema.gov.sg/media/files/publications/soo/EMA_SOO_2010_web-2.pdf

[6] EMA, Operation Statistics: http://www.ema.gov.sg/Statistics.aspx

[7] APEC Energy Demand and Supply Outlook – 5th Edition, at: http://aperc.ieej.or.jp/publications/reports/outlook/5th/volume2/EDSO5_V2_Singapore.pdf (Access date: Dec 4, 2014)

[8] EMA, Energizing Our Nation, Singapore Energy Statistics 2013, at: http://www.ema.gov.sg/media/files/publications/SES%202013.pdf

[9] C. L. Sien, H. Khan, C. L. Ming, " The Coastal Environmental Profile of Singapore ", Intl. Center for Living Aquatic Resources Management (ICLARM) on behalf of the Association of Southeast Asian Nations/United States Coastal Resources Management Project, 1988.

[10] EMA, Energizing Our Nation, Singapore Energy Statistics 2012, at: http://www.ema.gov.sg/media/files/publications/EMA_SES_2012_Final.pdf

[11] “Notable Energy Developments since EWG43 Singapore”, Summary record of 44th APEC Energy Working Group (EWG) Meeting 7 – 8 November 2012, Washington DC, at : http://www.ewg.apec.org/documents/Notable%20Developments%20for%20Singapore%20since%20EWG43.pdf (Access date: Dec 4, 2014)

[12] “Oil and Gas Practice: Partnerships Reshaping Asia’s Gas Industry”, at presentation by McKinsey & Co Inc. at Asia gas partnership summit 2012, New Delhi, India.

[13] "Clean Coal Technologies, Carbon Capture & Sequestration”, World Nuclear Association, at : http://www.world-nuclear.org/info/energy-and-environment/-clean-coal--technologies/ (Access date: Dec 4, 2014)

[14] “A case study for sustainability: Accelerating the adoption of renewable energy in Singapore”, a white paper by the Clean Energy Committee of the Sustainable Energy Association of Singapore (SEAS), Jan 2014, at: http://www.seas.org.sg/resources/presentationpapers

[15] S. King, P Wettergren, “Feasibility Study of Renewable Energy in Singapore” Bachelor of Science Thesis, KTH School of Industrial Engineering and Management Energy Technology, Stockholm, 2011.

[16] Windfinder, Wind and weather statistics Changi, at: http://www.windfinder.com/windstatistics/singapore_changi

[17] World Coal Association (WCA), Improving Efficiencies, at: http://www.worldcoal.org/coal-the-environment/coal-use-the-environment/improving-efficiencies/ (Access date: November 15, 2014)

[18] IEA “World Energy Outlook 2008” IEA Publications, 9, rue de la Fédération, 75739 Paris Cedex 15, at: http://www.worldenergyoutlook.org/media/weowebsite/2008-1994/weo2008.pdf (Access date: November 8, 2014)

[19] “CCS Cost Reduction Task Force Final Report 2013”, London, UK, at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/201021/CCS_Cost_Reduction_Taskforce_-_Final_Report_-_May_2013.pdf (Access date: November 10, 2014)



Studies of isothermal swirling flows with different

RANS models in unconfined burner

Norwazan, A.R.

Faculty of Engineering

Universiti Pertahanan Nasional Malaysia, Kem Sg. Besi, 57000, Kuala Lumpur, Malaysia

[email protected]

Mohd. Jaafar, M.N.

Faculty of Mechanical Engineering

Universiti Teknologi Malaysia,

81310, UTM Skudai, Johor Bahru, Malaysia.

[email protected]

Abstract— Numerical analysis with computational fluid dynamics (CFD) is one of the technologies that can minimize experimental cost. This paper presents analyses of isothermal swirling turbulent flows in the combustion chamber of an unconfined burner. Many types of CFD model can be used to simulate isothermal flows with the Reynolds-Averaged Navier-Stokes (RANS) approach, involving two-equation turbulence closures. Various RANS models, including the standard k-ε, RNG k-ε and realizable k-ε turbulence models, were applied. The analyses were carried out to determine the effect of the axial and tangential velocities of the flow, in particular to capture the center recirculation zone (CRZ). A swirler is used in the burner and significantly influences the flow pattern inside the combustion chamber. The inlet velocity U0 of 30 m/s enters the burner through an axial swirler with SN = 0.895, representing a high Reynolds number, Re. The studies were performed using ANSYS Fluent to evaluate the effect of the different RANS models. Transverse flow field methods were used to determine the behaviour of both velocity components downstream of the axial swirler. The axial and tangential velocity results were normalized with U0. The velocity profiles clearly change after the flow enters the swirler, with slightly different patterns for each model; however, the flow patterns of all RANS models are similar on planes towards the outlet of the burner.

Keywords-swirling flow; axial velocity; tangential velocity;

RANS model; burner

I. INTRODUCTION

After Jones and Launder [1] introduced the k-ε turbulence model for the prediction of turbulent swirling flow, many studies of turbulent swirling flow in gas turbine combustors followed [2]. However, the characteristics of swirling flows are not limited to gas turbines; they also arise in burners, furnaces, cyclone combustors and utility boilers [3-5]. A swirler is attached inside the combustor to merge the two streams of air and fuel and to ensure good mixing. Swirlers are used as flame holders to control the mixture speed relative to the flame speed [6]. In addition, generating a swirling flow inside the burner enhances the mixing of the different constituents of the mixture, permitting better control over the combustion process in terms of flame quality and pollutant emissions. This is because an appropriate degree of swirl generates additional turbulence in the shear layer between the forward and reverse flows and promotes flame stabilization [3]. The swirler is also employed to bring hot species in the swirling flow back to the combustion zone and to lower the possibility of flame blow-off [7]. Therefore, flame structure and combustion stability depend strongly on the aerodynamics and the mixing characteristics of fuel and oxidizer in their mixing region [8-11].

The prediction of swirling flow characteristics in the combustor can be carried out by numerical simulation in order to optimize the design. Numerical study through computational fluid dynamics (CFD) has great potential for investigating isothermal flows and combustion processes, and these computational methods of solving the differential equations of fluid dynamics are well advanced [12]. The turbulent behaviour of inertial systems, at every time in the space continuum, appears to show similar characteristics such as vortex structures and structural inhomogeneities [13]. Turbulence is the state of a fluid undergoing non-regular or irregular motion, such that the velocity at any point may vary in both magnitude and direction with time. Turbulent motion is accompanied by the formation of eddies and the rapid interchange of momentum in the fluid. Turbulence sets up greater stresses throughout the fluid and causes more irreversibility and losses.

Shamami and Birouk [7] concluded that the standard k-ε model predicts the size of the CRZ well for low-swirl-number flow when compared with experimental data. They also found that all Reynolds-Averaged Navier-Stokes (RANS) models can predict the CTRZ for strongly swirling flow. German and Mahmud [14] reported that the standard k-ε model was still reasonable for predicting the overall flow; however, its predictions inaccurately show the general trend of the tangential velocity distribution, assuming a forced-vortex profile. Ohtsuka [15] studied a highly swirling air jet flow and found that disagreements between computation and experiment still occur near the centerline of the flow, where the computed axial and tangential velocities are smaller than the experimental results. That study attributed the reduced axial velocity near the centerline at X/d = 10 to the reduction of the tangential velocity.

Zhuowei et al. [16] studied the isothermal flow at low and high SN using RANS and LES models, and found that the LES model shows improved results over the RANS models.



However, the RANS models are still valuable in swirling flow studies. Mathur and Maccallum [17] studied axial swirlers with various vane angles. Their flows showed that the axial swirler with a 60° vane angle produces a greater CTRZ, which not only extended upstream to the swirler hub but also slightly blocked the annular flow area at the swirler exit. With the 45° vane angle axial swirler, the CTRZ was firmly established. The performance of isothermal flow in a combustor is influenced by the inlet boundary conditions and parameters [18-20].

The swirl intensity is generally characterized by the swirl number, defined as the ratio of the axial flux of azimuthal momentum to the axial flux of axial momentum [21-23]. The swirl number is a measure of the strength of the swirling flow [24] and is one of the main parameters used to characterize it [25]. Generally, a swirl number above 0.6 indicates a strongly swirling flow [26]. The swirl number is defined as:

$$S_N = \frac{G_\theta}{G_x\,r_0} \qquad (3)$$

where

$$G_\theta = \int_{r_i}^{r_0} \rho\,U\,W\,r^2\,dr \qquad (4)$$

and

$$G_x = \int_{r_i}^{r_0} \rho\,U^2\,r\,dr \qquad (5)$$

where U, W and ρ are the axial velocity, tangential velocity and density, respectively. For an axial swirler, the swirl number is related to the swirl angle θ and the inner and outer radii, r_i and r_0, as given by [27], with the swirl number proportional to tan θ [28]:

$$S_N = \frac{2}{3}\left[\frac{1-(r_i/r_0)^3}{1-(r_i/r_0)^2}\right]\tan\theta \qquad (6)$$
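The swirl-number relation for an axial swirler can be checked numerically. The sketch below assumes a hypothetical hub-to-tip radius ratio of 0.43 (the actual swirler geometry ratio is not stated in the text); with the 50° vane angle used in this study, it reproduces a swirl number close to the SN = 0.895 quoted in the abstract:

```python
import math

def axial_swirl_number(vane_angle_deg: float, hub_tip_ratio: float) -> float:
    """Swirl number of an axial swirler:
    S_N = (2/3) * (1 - (ri/r0)**3) / (1 - (ri/r0)**2) * tan(theta)."""
    t = hub_tip_ratio
    return (2.0 / 3.0) * (1.0 - t ** 3) / (1.0 - t ** 2) * math.tan(math.radians(vane_angle_deg))

sn = axial_swirl_number(50.0, 0.43)  # 0.43 is an assumed, illustrative hub ratio
print(f"SN = {sn:.3f}")
assert sn > 0.6  # above 0.6 indicates a strongly swirling flow [26]
```

Since tan θ grows rapidly with vane angle, the same geometry at a 60° vane angle yields a markedly stronger swirl, consistent with the larger CTRZ reported by Mathur and Maccallum [17].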

Steady-state, incompressible, turbulent flows are governed by the Reynolds-averaged continuity and Navier-Stokes equations. The conservation form of these equations can be written as

Continuity:

$$\frac{\partial(\rho\,\bar{u}_i)}{\partial x_i} = 0 \qquad (7)$$

Axial momentum:

$$\frac{\partial(\rho\,\bar{u}_j\bar{u}_i)}{\partial x_j} = -\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial \bar{u}_i}{\partial x_j}+\frac{\partial \bar{u}_j}{\partial x_i}\right) - \rho\,\overline{u_i'u_j'}\right] \qquad (8)$$

In the standard k-ε model, the turbulent viscosity is computed from the turbulence kinetic energy k and its dissipation rate ε as follows:

$$\mu_t = \rho\,C_\mu \frac{k^2}{\varepsilon} \qquad (9)$$

Two differential transport equations describe the turbulence kinetic energy k and the dissipation rate of turbulence ε in Eqs. (10) and (11), respectively [29-32]:

$$\frac{\partial(\rho\,k\,\bar{u}_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\mu+\frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P - \rho\varepsilon \qquad (10)$$

and

$$\frac{\partial(\rho\,\varepsilon\,\bar{u}_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\mu+\frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P - C_{2\varepsilon}\,\rho\frac{\varepsilon^2}{k} \qquad (11)$$

P represents the production of turbulence kinetic energy. The model constants are σk = 1.0, σε = 1.3, C1ε = 1.44, C2ε = 1.92 and Cμ = 0.09.
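The role of the eddy-viscosity relation µt = ρCµk²/ε can be illustrated in a few lines. The k and ε values below are illustrative placeholders, not values from this study; only Cµ = 0.09 comes from the model constants above:

```python
# Standard k-epsilon eddy viscosity: mu_t = rho * C_mu * k^2 / eps.
RHO = 1.225   # air density, kg/m^3 (assumed ambient value)
C_MU = 0.09   # standard k-epsilon model constant

def eddy_viscosity(k: float, eps: float) -> float:
    """Turbulent viscosity in Pa*s, given k in m^2/s^2 and eps in m^2/s^3."""
    return RHO * C_MU * k ** 2 / eps

mu_t = eddy_viscosity(k=1.215, eps=50.0)  # illustrative k and eps values
print(f"mu_t = {mu_t:.2e} Pa*s")
```

Even for these modest turbulence levels, the resulting µt is orders of magnitude larger than the molecular viscosity of air (about 1.8e-5 Pa·s), which is why the eddy viscosity dominates momentum transport in these simulations.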

The RNG k-ε model, developed by [33] using re-normalization group theory with scale expansions for the Reynolds stress, is an improved version of the standard k-ε model. The k and ε equations have the same form as (10) and (11), but the values of the model constants differ. The two equations become

$$\frac{\partial(\rho\,k\,\bar{u}_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\alpha_k\,\mu_{\mathrm{eff}}\frac{\partial k}{\partial x_j}\right] + P - \rho\varepsilon \qquad (12)$$

and

$$\frac{\partial(\rho\,\varepsilon\,\bar{u}_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\alpha_\varepsilon\,\mu_{\mathrm{eff}}\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P - C_{2\varepsilon}^{*}\,\rho\frac{\varepsilon^2}{k} \qquad (13)$$

P represents the production of turbulence kinetic energy. The quantities αk and αε are the inverse effective Prandtl numbers for k and ε, respectively [34]. The model constants are σk = 1.39, σε = 1.39, C1ε = 1.42, C2ε = 1.68 and Cμ = 0.0845.

The realizable k-ε model was formulated so that the calculated normal Reynolds stresses are positive definite and the shear Reynolds stresses satisfy the Schwarz inequality [35]. The form of the turbulence kinetic energy equation is the same as equation (10), but the modification replaces the constant Cμ used to calculate the eddy viscosity in equation (9) by a function of the mean strain and rotation rates [36]:

$$C_\mu = \frac{1}{A_0 + A_s\,\dfrac{k\,U^{*}}{\varepsilon}} \qquad (14)$$

and the dissipation-rate equation becomes

$$\frac{\partial(\rho\,\varepsilon\,\bar{u}_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\mu+\frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + \rho\,C_1 S\varepsilon - \rho\,C_2\frac{\varepsilon^2}{k+\sqrt{\nu\varepsilon}} \qquad (15)$$

Gk represents the production of turbulence kinetic energy. The model constants σk = 1.0, σε = 1.2, and C2 = 1.9 have been established to ensure that the model performs well. Shih et al. [34] claimed that the realizable k-ε model yields better results for rotating shear flows than the standard k-ε model.

The present paper investigates the swirling flow to



define its isothermal characteristics and, in particular, the center recirculation zone, in order to obtain better mixing for the combustion process. An axial swirler is adopted inside a burner, and the simulations are repeated with different RANS models. The studies are presented using transverse flow fields at different radial distances after the airflow enters the swirler, downstream towards the burner outlet.

II. METHODOLOGY

The Computational Fluid Dynamics (CFD) computer code can be used as a numerical tool to solve the governing equations. To simulate with CFD, several steps were taken: constructing a high-quality mesh, choosing high-order discretization schemes and robust equation solvers, and ensuring adequate convergence. Initially, the Computer-Aided Design (CAD) model was created in three dimensions (3D) using AutoCAD 2012 software, according to the actual laboratory scale of a liquid fuel burner. In the present study, the numerical simulation of the isothermal swirling flow issuing from the inlet of the burner is considered using several RANS models, namely the standard k-ε, realizable k-ε and RNG k-ε turbulence models. The 3D CAD model was assembled and exported to produce the mesh and set up the boundary conditions. The mesh was composed primarily of tetrahedral elements, with hexahedral, pyramidal and wedge elements of various sizes. 3D RANS computations of the entire section, including the swirl generation system and the burner, were performed using the commercial CFD software ANSYS Fluent. The Fluent code uses a finite-volume procedure to solve the RANS equations of fluid flow in primitive variables such as axial velocity, tangential velocity, turbulent intensity and turbulent kinetic energy.

The working fluid is specified as isothermal and incompressible for non-premixed modeling. A 3D computational grid of 1.5 million cells was employed in the RANS approach to simulate the isothermal flow in the unconfined burner. The second-order upwind scheme was set for spatial discretization. The RANS models were discretized using the Quadratic Upstream Interpolation for Convective Kinematics (QUICK) scheme, and the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm was used for pressure-velocity coupling. Zero-gradient boundary conditions are applied at the outlet with a constant pressure of 10^5 N/m^2.

A swirler and a nozzle, serving as the air inlet and the fuel inlet, are located at the front of the burner. Both inlets were meshed with fine grids, as shown in Fig. 1. The burner with the axial swirler has an outer diameter of 280 mm, and the swirler diameter is 73 mm. The length of the burner is 1000 mm. The swirler adopted in the burner has a vane angle of 50 degrees. The inlet velocity was set to U0 = 30 m/s for all the RANS models to define their characteristics and the center recirculation zone. The transverse flow field data were extracted at various locations along the burner length at z-planes. The inflow k was estimated assuming 3% turbulence intensity [37]. The solution is assumed converged when all residuals fall below 10^-5.
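The inflow k estimate from the 3% turbulence intensity follows the usual isotropic-turbulence relation k = (3/2)(I·U0)²; the relation itself is standard practice rather than spelled out in the text. A quick sketch:

```python
# Inflow turbulence kinetic energy from turbulence intensity:
# k = (3/2) * (I * U0)^2, the standard isotropic estimate.
U0 = 30.0  # inlet velocity, m/s (as in this study)
I = 0.03   # 3% turbulence intensity (as in this study)

k_inflow = 1.5 * (I * U0) ** 2
print(f"k = {k_inflow:.3f} m^2/s^2")  # 1.215 m^2/s^2
```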

Figure 1. Mesh model of the unconfined burner, including the axial swirler

III. RESULTS AND DISCUSSION

The velocity magnitude along the combustor was obtained by both experimental and numerical methods. Fig. 2 shows the good agreement between the two methods in the axial direction along the combustion chamber of the burner. The experimental data were measured at 10 points along the axial distance at the center core of the flow. The comparison used the same parameters: U0 of 30 m/s and a 50-degree swirl vane angle.

Figure 2 Velocity magnitude profiles along combustor

Figure 3 Axial velocity profiles along combustor

Fig. 3 shows the comparison of axial velocity for the three different RANS models along the centerline axis of the combustor. The airflow decelerates to nearly zero towards the downstream end. The peak of axial velocity for


standard k-ε and RNG k-ε models was in the negative direction in the inlet zone. This strongly negative velocity signifies the strength of the CTRZ. The RNG k-ε model shows a negative velocity peak in the inlet zone of up to 10.82 m/s, while that of the standard k-ε model is 9 m/s. For the realizable k-ε model, the peak axial velocity near the inlet zone is in the positive direction; the axial velocity then decreases into the negative direction up to x = 0.12 m as the swirling flow generates the CTRZ. This is defined as the extent of the CTRZ. The standard k-ε model predicts a smaller CTRZ volume, while the RNG k-ε model provides a slightly improved prediction of the CTRZ. As mentioned by Xia [38], the RNG k-ε model predicts better than the standard k-ε model but is more costly. Fig. 4 presents the axial velocity contours for each case. The realizable k-ε model presents a smaller CTRZ than the standard k-ε and RNG k-ε models. The RNG k-ε model presents the longest and widest flow pattern, but its CRZ is slightly smaller than that of the standard k-ε model. All the flows expand radially near the wall and become wide.

Figure 4. Axial velocity contours along the combustor

A. Axial Velocity Profiles

Generally, the zone behind the swirler lies in a strong axial stream, and the downstream formation, known as the recirculation zone [36], depends on the SN and U0. The principal importance of axial velocity profiles is in illustrating the jet boundary, the degree of expansion and the region of high velocity gradient [27]. They also define the boundaries of the forward- and reverse-flow zones. Reverse flow appears as negative mean axial velocity along the centerline and near the walls, indicating the existence of the CTRZ and CRZ [39,40].

The axial velocity profiles of the three RANS models are compared in Fig 5 at radial planes from x/D = 0.1 to x/D = 1.0. The three models present different patterns of axial velocity towards the downstream. At the centerline of the center core region, the realizable k-ε model has a positive axial velocity near the inlet zone; at the x/D = 0.1 plane, therefore, the CTRZ has not yet developed. For both the standard k-ε and RNG k-ε models, however, the CTRZ is produced as soon as the flow enters the inlet zone, and these two profiles move slowly in the positive direction, as the realizable k-ε profile does. At the x/D = 0.2 plane, the axial velocity profiles of all models are negative in the center core region, which indicates the existence of the CTRZ. The RNG k-ε model recovers towards the positive direction faster than the standard k-ε model, whereas the realizable k-ε model has a larger negative axial velocity than the RNG k-ε model at the x/D = 0.2 plane.
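The criterion used above, negative mean axial velocity marking the reverse-flow region, can be applied directly to a sampled centerline profile to estimate the streamwise extent of the CTRZ; the profile values below are hypothetical, chosen only for illustration:

```python
def ctrz_extent(x_over_D, u_axial):
    """Return (start, end) in x/D where the centerline mean axial velocity
    is negative, i.e. the streamwise extent of the reverse-flow (CTRZ)
    region. Returns None if the velocity never becomes negative."""
    negative = [x for x, u in zip(x_over_D, u_axial) if u < 0.0]
    return (min(negative), max(negative)) if negative else None

# Hypothetical centerline samples: x/D planes vs mean axial velocity [m/s].
x = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
u = [1.2, -0.8, -1.5, -0.4, 0.3, 0.9]

print(ctrz_extent(x, u))  # reverse flow spans x/D = 0.2 to 0.6
```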

As the flow moves toward the burner outlet, the axial velocity patterns become similar. At every plane, the flow peaks near the walls grow higher in the radial direction. Moving downstream, the axial velocity profiles keep changing until x/D = 1.0, where the peak velocities expand radially in a similar pattern in all cases; after x/D = 0.8, the axial velocity of all cases follows a similar trend and starts to flatten towards the burner outlet.

Merkle et al. [40] studied experimentally the effect of co-swirl and counter-swirl airflows and found that the isothermal normalized axial velocity has a positive peak value that decreases up to x/R0 = 2 near the burner outlet. Marzouk and Huckaby [41] applied three different k-ε turbulence models to a co-axial particle-laden air flow in an annular jet. They found that, in the early stage, the standard k-ε model achieved the best overall prediction of the velocities, while the RNG k-ε model predicted extra recirculation zones, as also seen in Fig 4 of this case study. The realizable k-ε model, however, was the most computationally expensive and was unable to predict the radial velocity satisfactorily. Xia et al. [42] showed that the outlet boundary conditions have some influence near the outlet but almost no effect further upstream for an isothermal swirling flow. Eiamsa-ard and Promvonge [43] indicate that the standard k-ε model predicts a faster decay of the centerline axial velocity than the algebraic RSM model; their tangential velocity profiles also show that the standard k-ε model leads to a rapid decay of the profiles to solid-body rotation. These studies concluded that the inlet conditions for k and ε play a crucial role in the accuracy of the turbulence predictions.

B. Tangential Velocity Profiles

Fig 6 illustrates the tangential velocity profiles for each RANS model. At x/D = 0.1, the highest peak of tangential velocity occurs near r/D = 0.25 and is given by the standard k-ε model. Hagiwara et al. [44] reported a tangential velocity pattern similar to that of the standard k-ε and RNG k-ε models in this study. In contrast, the realizable k-ε profile is slightly flattened in this range. In the early stage near the inlet, the tangential velocity moving forward in the axial direction is higher than elsewhere, but it spreads quickly over the radial distance. The maximum peak of the tangential velocity in each case shifts from the center core to the wall region and is highest after x/D = 0.6. In this isothermal case, the tangential velocities in Fig 6 show that the velocities diminish as the flow expands along the wall at each plane. These tangential velocity fields are well predicted near the inlet of the axial swirler in the center region. The flow patterns are similar and behave in the same way towards the downstream, as shown from x/D = 0.6 to x/D = 1.0; however, the tangential velocity of the RNG k-ε model starts to increase at x/D = 1.0 near the outlet.

Copyright © 2014 WCST-2014 Technical Co-Sponsored by IEEE UK/RI Computer Chapter 52

Raj and Ganesan [45] found good agreement between numerical and experimental results for 15˚, 30˚, 45˚ and 60˚ vane angles, with the trend of the swirling flow well predicted. For higher SN, the maximum tangential velocity values move away from the center flow into the core region. As expected, the tangential velocity reaches its maximum at the x/D plane nearest the swirler and starts decreasing towards the downstream. Zhuowei et al. [16] observed that the tangential velocity exhibits a forced vortex, characterized by tangential velocity increasing in the central region, and a free vortex, characterized by tangential velocity decreasing when approaching the wall. A similar finding appears in this result: in each case the tangential velocity increases moving towards the walls. Shamami and Birouk [7] studied a highly swirling flow (S = 0.81) with the standard, RNG, and realizable k-ε models and the SST k-ω model, which gave similar tangential velocity results in good agreement with the experimental data; the maximum tangential velocity lies about halfway along the radial distance near the inlet. The standard k-ε model predicts better than the other k-ε variants, whereas the stress models are more accurate.
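The forced-vortex/free-vortex behaviour reported by Zhuowei et al. [16] corresponds to the classical Rankine vortex profile. As a sketch (the core radius and circulation below are arbitrary illustrative values, not data from this study):

```python
import math

def rankine_tangential_velocity(r: float, core_radius: float, gamma: float) -> float:
    """Rankine vortex: solid-body (forced-vortex) rotation inside the core,
    v_t = Gamma * r / (2*pi*rc^2); free-vortex decay outside the core,
    v_t = Gamma / (2*pi*r). The tangential velocity peaks at r = rc."""
    if r <= core_radius:
        return gamma * r / (2.0 * math.pi * core_radius**2)
    return gamma / (2.0 * math.pi * r)

rc, gamma = 0.02, 0.5  # arbitrary core radius [m] and circulation [m^2/s]
# Tangential velocity rises linearly to the core edge, then decays toward the wall:
for r in (0.01, 0.02, 0.04, 0.08):
    print(f"r = {r:.2f} m -> v_t = {rankine_tangential_velocity(r, rc, gamma):.2f} m/s")
```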

Figure 5 Axial velocity profiles

Figure 6 Tangential velocity profiles

IV. CONCLUSIONS

Based on the overall performance of the RANS models, the standard k-ε model gives the most appropriate results, since it presents the CTRZ in a good and reasonable shape that is wide and shorter than the others. Moreover, at the x/D = 1.0 plane the standard k-ε model behaves similarly to the realizable k-ε model towards the downstream, while being more economical and less time-consuming.

ACKNOWLEDGMENT

The authors would like to thank the Ministry of Science, Technology and Innovation (MOSTI) for funding under Sciencefund Grant 4S046 (Dr. Mohammad Nazri Mohd Jaafar, Project Leader), as well as UTM for its support.

REFERENCES

[1] W.P. Jones, and B.E. Launder, “The prediction of laminarisation with a two-equation model of turbulence”, International Journal of Heat and

Mass Transfer, Vol. 15, pp. 301-14, (1972).

[2] A.C. Benim, M.P. Escudier, A. Nahavandi, A.K. Nickson, K.J. Syed and F. Joos, “Experimental and numerical investigation of isothermal flow in

an idealized swirl combustor,” International Journal of Numerical Methods for Heat & Fluid Flow, Vol. 20 No.3, pp. 348-370, (2010).

[3] S. Eiamsa-ard, and P. Promvonge, “A Numerical Study of Gas Jets in

Confined Swirling Air Flow”, Chiang Mai Journal Science, 33(3): 255-270, (2006).


[4] A. Ridluan, S. Eiamsa-ard, and P. Promvonge, “Numerical simulation of

3D turbulent isothermal flow in a vortex combustor”, International Communication in Heat and Mass Transfer 34 (2007) 860-869.

[5] I.V. Litvinov, S.I. Shtork, P.A. Kuibin, S.V. Alekseenko, and K. Hanjalic, “Experimental study and analytical reconstruction of

processing vortex in a tangential swirler”, International Journal of Heat and Fluid Flow (2013) 251–264.

[6] M.N. Mohd Jaafar, K. Jusoff, M.S. Osman, and M.S.A. Ishak,

“Combustor Aerodynamics Using Radial Swirler”, International Journal of Physics Sciences, Vol. 6, No. 13, pp 3091-3098, (2011).

[7] K.K. Shamami, and M. Birouk, “Assessment of the Performances of

RANS Models for Simulating Swirling Flows in a Can Combustor”, The Open Aerospace Engineering Journal, 1, 8-27 (2008)

[8] N. Syred, and J.M. Beer, “Combustion in Swirling Flows: A Review”,

Combustion and Flame 23, 143-201 (1974).

[9] D.G. Sloan, P.J. Smith, and L.D. Smoot, “Modeling of swirl in turbulent flow systems”, Prog. Energy Combustion Science, 1986, Vol. 12, pp

163-250.

[10] I.V. Litvinov, S.I. Shtork, P.A. Kuibin, S.V. Alekseenko, and K. Hanjalic, “Experimental study and analytical reconstruction of

processing vortex in a tangential swirler”, International Journal of Heat and Fluid Flow (2013) 251–264.

[11] Y.A. Eldrainy, M.N. Mohd Jaafar, and T. Mat Lazim, “Numerical investigation of the flow inside primary zone of tubular combustor

model”, Jurnal Mekanikal, December 2008, No. 26, 162-176.

[12] B.F. Magnussen, and B.H. Hjertager, “On mathematical modeling of turbulent combustion with special emphasis on soot formation and

combustion”, 16th Symposium (International) on Combustion, Combustion Institute, pp. 719-729, 1976.

[13] D.T. Yegian, and R.K. Cheng, “Development of a vane-swirler for use in

a low NOx weak-swirl burner”, Combust. Sci. Technol. 139 (1–6) (1998) 207–227.

[14] A.E. German and T. Mahmud, “Modelling of non-premixed swirl burner

flows using a Reynolds-stress turbulence closure”, Fuel 84 (2005) 583-594.

[15] M. Ohtsuka, “Numerical analysis of swirling non-reacting and reacting

flows by Reynolds stress differential method”, Int. J. Heat Mass Transfer, Vol. 38, No. 2, pp. 331-337 (1995).

[16] L. Zhuowei, N. Kharoua, H. Redjem, and L. Khezzar, “RANS and LES

simulation of a swirling flow in a combustion chamber with different swirl intensities”, Proceedings ICHMT International Symposium on

Advances in Computational Heat Transfer (2012).

[17] M. L. Mathur, and N. R. L. MacCallum, “Swirling Air Jets Issuing from

Vane Swirlers. Part 1: Free Jets”, Journal of the Institute of Fuel, Vol. 40, 214 – 22, (1967).

[18] M. Dong & D.G. Lilley, “Effects of Inlet Flow Parameters on Confined

Turbulent Swirling Flow”, ASME International Computers in Engineering Conference, San Diego, ASME Press, New York, NY, Vol.

1, (1993).

[19] Y. Ikeda, Y. Yanagisawa, S. Hosokawa, and T. Nakajima, “Influence of inlet conditions on the flow in a modal gas turbine combustor”,

Experimental Thermal and Fluid Science, 1992, 5, 390.

[20] G.J. Sturgess, S.A. Syed and K.R. McMaus, “Importance of inlet boundary conditions for numerical simulation of combustor flows”,

AAIA-83-1263, 1983.

[21] B.E. Launder, and D.B. Spalding, “The numerical computation of turbulent flows”, Computer Methods in Applied Mechanics and

Engineering 3 (1974) 269-289.

[22] R. Palm, S. Grundmann, M. Weismuller, S. Saric, S. Jakirlic, and C. Tropea, Experimental characteristization and modeling of inflow

conditions for a gas turbine swirl combustor, International Journal of Heat and Fluid Flow 01/2006.

[23] Y.A. Eldrainy, K.M. Saqr, H.S. Aly, and M.N. Mohd Jaafar, “CFD insight of the flow dynamics in a novel swirler for gas turbine

combustors”, International Communications in Heat and Mass Transfer 36 (2009) 936–941.

[24] A.E.E. Khalil, and A.K. Gupta, “Distributed swirl combustion for gas

turbine application”, Applied Energy 88 (2011) 4898–4907.

[25] R.C. Orbay, K.J. Nogenmyr, J. Klingmann, and X.S. Bai, “Swirling

turbulent flows in a combustion chamber with and without heat release”, Fuel 104 (2013) 133-146.

[26] A. Pollard, H.L.M. Ozem, and E.W. Grandmaison, “Turbulent, swirling

flow over an asymmetric constant radius surface”, Experimental Thermal and Fluid Science 29 (2005) 493-509.

[27] S.A. Beltagui, A.M.A. Kenbar, and N.R.L. Maccallum, “Comparison of

Measured Isothermal and Combusting Confined Swirling Flows: Peripheral Fuel Injection”, Experimental Thermal and Fluid Science

1993; 6:147-156.

[28] J.F. Bourgouin, J. Moeck, D. Durox, T. Schuller, and S. Candel, “Sensitivity of swirling flows to small changes in the swirler geometry”,

C. R. Mecanique 341 (2013) 211–219.

[29] A. Datta, and S.K. Som, “ Combustion and emission characteristics in a gas turbine combustor at different pressure and swirl conditions”,

Applied Thermal Engineering 19 (1999) 949-967.

[30] A. Ridluan, S. Eiamsa-ard, and P. Promvonge, “Numerical simulation of 3D turbulent isothermal flow in a vortex combustor”, International

Communication in Heat and Mass Transfer 34 (2007) 860-869.

[31] B.E. Launder, and D.B. Spalding, “The numerical computation of

turbulent flows”, Computer Methods in Applied Mechanics and Engineering 3 (1974) 269-289.

[32] J.L. Xia, G. Yadigaroglu, Y.S. Liu, J. Schmidli, and B.L. Smith,

“Numerical and experimental study of swirling flow in a model”, Int. Journal Heat Mass Transfer, Vol. 41, No. 11 pp 1485-1497, (1998).

[33] V. Yakhot, and S.A. Orszag, “Renormalization Group Analysis of

Turbulence: Basic Theory”, J. Sci. Comput., V1, N1, pp. 3-51, 3/86.

[34] T.H. Shih, W.W. Liau, A. Shabbir, and J. Zhu, “A New Eddy Viscosity Model for High Reynolds Number Turbulent Flows-Model Development

and Validation”, Computational Fluids, V24, N3, pp. 227-238, 3/95.

[35] S. Murphy, R. Delfos, M.J.B.M. Pourquie, Z. Olujic, P.J. Jansens, and F.T.M. Nieuwstadt, “Prediction of strongly swirling flow within an

axial hydrocyclone using two commercial CFD codes”, Chemical Engineering Science 62 (2007) 1619-1635.

[36] C. Sicot, P. Devinant, S. Loyer, and J. Hureau, “Rotational and turbulence

effects on a wind turbine blade investigation of the stall mechanism”, J Wind Eng Ind. Aerod, V96, N8-9, pp. 1320-1331, 8-9/08.

[37] J.Xia, “Numerical and experimental study of swirling flow in a model combustor” (1998).

[38] D.G. Lilley, “Prediction of Inert Turbulent Swirl Flows”, AIAA Journal,

Vol. 11, No. 7 (1973), pp. 955-960.

[39] B. Rohani, and K.M. Saqr, “Effects of hydrogen addition on the structure and pollutant emissions of a turbulent unconfined swirling

flame”, International Communication in Heat and Mass Transfer 39 (2012) 681-688.

[40] K. Merkle, H. Haessler, H.Buchner and N. Zarzalis, “Effect of co- and

counter-swirl on the isothermal flow- and mixture-field of an airblast atomizer nozzle”, International Journal of Heat and Fluid Flow 24

(2003)529-537.

[41] O.A. Marzouk and E.D. Huckaby, “Simulation of a swirling gas-particle flow using different k-epsilon models and particle-parcel relationships”,

Engineering Letters, 18:1, (2010).

[42] J.L. Xia, B.L. Smith, A.C. Benim, J.Schmidli and G. Yadigaroglu, “Effect of inlet and outlet boundary conditions on swirling flows”,

Computers & Fluids Vol. 26, No. 8, pp. 881-823, (1997).

[43] S. Eiamsa-ard and P. Promvonge, “A Numerical Study of Gas Jets in

Confined Swirling Air Flow”, Chiang Mai J. Sci. 2006; 33(3): 255 – 270.

[44] A. Hagiwara, S. Bortz, and R. Weber, “Theoretical and experimental

studies on isothermal expanding swirling flows with application to swirl burner design”, Technical report doc F 259/a/3, International Flame Research

Foundation, 1986.

[45] R.T.K. Raj, and V. Ganesan, “Study on the effect of various parameters on flow development behind vane swirlers”. International Journal of

Thermal Sciences 47 (2008) 1204-1225.


Challenging Instruments and Capacities to Engage in

Sustainable Development

Carlos Germano Ferreira Costa

Universidade Federal do Ceará

DDMA/PRODEMA/UFC

UNDP Consultant

Fortaleza, Brazil

[email protected]

Abstract—Anthropogenic GHG emissions need to fall to zero by 2100, and atmospheric concentrations must be stabilized at 550 ppm by 2030, to prevent the global mean temperature from rising more than 3°C above current levels by the end of this century. This will require collaborative action among developed, developing and emerging markets to reduce annual global emissions from 60 GtCO2e to less than 30 GtCO2e in the coming decades. The most accepted way to deal with the threat of climate change posed by increased CO2 and other greenhouse gases in the atmosphere is a move over time to a Low-Carbon Economy. However, it is clear that developed countries alone cannot sufficiently reduce their emissions to stabilize global GHG concentrations; a shift towards a low-carbon development path will also be necessary in developing and emerging economies to reduce global GHG concentrations on the required scale. The different instruments and capabilities for adaptation and mitigation discussed in this paper may offer valuable ways of managing climate change issues.

Keywords—Low-Carbon Economy; New Market-based Mechanism; Framework for Various Approaches; Mitigation; Adaptation.

I. INTRODUCTION

Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750 and now far exceed pre-industrial values determined from ice cores spanning many thousands of years [8]. The global increases in carbon dioxide concentration are due primarily to fossil fuel use and land use change, while those of methane and nitrous oxide are primarily due to agriculture [1]. For stabilization of CO2 concentrations, even at relatively high levels in the atmosphere, emissions must eventually be no greater than the level of persistence in natural sinks; the main sink is the ocean, which can absorb about 0.1 gigatons of carbon per year. For the lowest emissions scenario of the IPCC, anthropogenic emissions of GHG need to fall to zero by 2100 [3].

According to the IPCC report (2007), to prevent the global mean temperature from rising more than 3°C, atmospheric GHG concentrations must be stabilized at 550 ppm by 2030. This will require all countries - developed, developing and emerging markets - to reduce annual global emissions from 60 GtCO2e to less than 30 GtCO2e in the coming decades [7]. The most accepted way to deal with the threat of climate change posed by increased CO2 and other greenhouse gases in the atmosphere is a move over time to a Low-Carbon Economy - an economy which produces much lower levels of CO2 [3]. However, it is clear that developed countries alone cannot sufficiently reduce their emissions to stabilize global GHG concentrations; it will therefore be necessary for developing and emerging economies to shift toward a low-carbon development path to reduce global GHG concentrations on the required scale, as well as to promote sustainable development and poverty reduction.
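The cited halving of annual emissions, from 60 GtCO2e to below 30 GtCO2e "in the coming decades", implies a sustained annual reduction rate. A quick back-of-the-envelope calculation, where the 30-year horizon is an assumption of this sketch rather than a figure from the paper:

```python
def constant_annual_reduction_rate(start_gt: float, target_gt: float, years: int) -> float:
    """Constant fractional cut r per year such that start*(1-r)**years == target."""
    return 1.0 - (target_gt / start_gt) ** (1.0 / years)

# Halving 60 GtCO2e to 30 GtCO2e over an assumed 30 years requires
# cutting global emissions by roughly 2.3% every single year.
rate = constant_annual_reduction_rate(60.0, 30.0, 30)
print(f"{rate:.1%}")  # -> 2.3%
```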

II. ECONOMIC TRANSFORMATION MAIN ISSUES

A. Moving to a Low-Carbon Economy

Moving to a Low-Carbon Economy means producing more energy from solar, wind, hydro, and other renewable energy sources including biomass, and perhaps nuclear. These sources produce either no CO2 emissions or very low levels of emissions. Besides a much greater reliance on renewable sources of energy [5], new strategies should focus on energy efficiency, new materials engineering, better public transportation systems, carbon capture and storage technologies, and the promotion of policies to change behavior among the population.

However, the history of economic transformation follows a familiar path. Dominant technologies and businesses are generally reliable and economical, and over time they develop a network of institutional and political support that effectively resists change. Then, new technologies and businesses generally enter a niche of the broader market, offering a higher cost service that meets specialized needs. Over time the new competitor becomes more economical and widens its share of the market, eventually undercutting the cost of the dominant player and gradually remolding the institutional infrastructure to meet its own needs [2].

Nevertheless, the greatest challenge for an economic transformation and a shift toward a low-carbon development path will be for renewable energy sources eventually to undercut the cost of the dominant fossil fuels, and to find ways to fit them into an energy system that was designed around fossil fuels, which have the advantage of being concentrated and easily stored [2]. To seriously de-carbonize the energy economy, ways must be found to power everything from transportation to the latest electronics with seemingly ephemeral energy sources such as solar energy and wind power. The ability to integrate new energy sources into the existing energy infrastructure would speed the transition and reduce its costs [2].

CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. PRODEMA - Programa de Pós-Graduação em Desenvolvimento e Meio Ambiente. Universidade Federal do Ceará.

Another key issue relates to energy markets, which have been shaped more than most by government policy, institutional constraints, and the power of large industrial enterprises [2]. Simple economic theory therefore provides minimal insight into how to spur change. Meanwhile, the uncertainty surrounding the future of existing carbon markets in recent years has prevented valuable resources from being channeled to low-carbon investments, particularly from the private sector. Carbon markets have endured challenging years since the global economic crisis of 2008–2009: the subsequent economic downturn led to a significant reduction in industrial activity in some major economies in the years immediately following the crisis, as well as falls in greenhouse gas (GHG) emissions in participating economies [4].

B. Instruments for Sustainable Development

There are some feasible approaches to promoting market instruments for a Low-Carbon Economy. The first is the New Market-based Mechanism (NMM), an international market mechanism that is set up and governed centrally under the UNFCCC. The second, known as the Framework for Various Approaches (FVA), is a proposal for a framework that would leave it to countries themselves to define their own approaches and methodologies in a decentralized manner. For the NMM, the responsibility for the development of rules and modalities, as well as for the governance of the mechanism, would lie with the UNFCCC. Two main variants have been proposed: crediting and trading. The purpose of the NMM is to provide incentives for mitigation actions in developing countries that go beyond the scale of the existing market-based mechanisms under the Kyoto Protocol, and it could be adapted to any agreement that may arise later. The FVA, on the other hand, refers to a general framework at the UNFCCC level providing an umbrella for different national, regional, and multilateral approaches to emission reductions that are implemented in a decentralized manner [5].

An FVA would allow individual countries to design, establish, and implement mechanisms based on their own standards and methodologies that are recognized within the UNFCCC, although under a common strategy. Two different models of how the FVA could work have been discussed. The first model would allow recognition of units issued by domestic schemes on the condition that they are approved by a UNFCCC body. A second model, in contrast, would not give approval power to the UNFCCC but would instead let the UNFCCC play a role by providing a platform for the exchange of information and a general set of common principles [6].

The low demand for carbon credits is currently a barrier to the development of a new market-based mechanism. As the details of the NMM are worked out, potential overlaps with other market-based as well as non-market mechanisms need to be addressed. Overlaps may exist with existing mechanisms (e.g., the CDM) but also with mechanisms under discussion or development (e.g., REDD+ and the FVA). Indicators to measure the achievement of results can be qualitative (e.g., contribution to sustainable development) or quantitative (e.g., emission reductions, kilowatt hours, hectares). While discussions on results-based climate finance typically focus on the achievement of emission reductions [4], adding the universe of human behavior and perception could offer a great contribution.

Naturally, a Low-Carbon Economy could bring great benefits in some areas such as taxation. Income tax, payroll tax and corporate tax could be offset with the revenue generated by a carbon tax, thus reducing the tax burden on individuals and businesses [9]. A carbon tax would be a primary source of government revenue as well as an incentive not to pollute, and the revenue windfall of a carbon tax could also be used for the direct support of emerging technologies through the investment tax credit [9]. However, it is vitally important to implement mechanisms that avoid stifling economic activity and that ensure sustainability over time.
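The revenue-neutral tax swap described above can be made concrete with a toy calculation; all figures below are hypothetical, chosen only to show the mechanics:

```python
def carbon_tax_swap(emissions_mt: float, tax_per_tonne: float, other_tax_revenue: float):
    """Revenue from a carbon tax and the fraction of existing (income/payroll/
    corporate) tax revenue it could offset while staying revenue-neutral."""
    carbon_revenue = emissions_mt * 1e6 * tax_per_tonne  # MtCO2e * $/tCO2e
    return carbon_revenue, carbon_revenue / other_tax_revenue

# Hypothetical economy: 500 MtCO2e taxed at $25/tCO2e against $250bn of
# existing tax revenue -> $12.5bn raised, offsetting 5% of the other taxes.
revenue, offset_share = carbon_tax_swap(500.0, 25.0, 250e9)
print(f"${revenue / 1e9:.1f}bn raised, offsets {offset_share:.0%}")
```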

In a time when discussions about whether human activity is responsible for climate change have become outdated, the world moves towards the discussion of impacts, vulnerability of territories and possibilities of adaptation. However, many factors hamper the progress of these discussions, often due to political issues. Apparently, among the different countries and private interests there is a lack of a broad political consensus on how to act [10]. Understanding the impacts of climate change on poverty rates in the developing world and how best to adapt and remain resilient to changes is essential and should include efforts to examine large-scale climate and other global changes (e.g. population growth, economic growth, political conflicts, territorial vulnerability, etc.) to inform decision-makers, accurately and consistently, about the management of socioeconomic and environmental issues [10].

III. CONSIDERATIONS

Naturally, different countries have different capacities to engage in a Low-Carbon Economy and in a genuinely new pattern of sustainable development. It is necessary, then, to re-balance the traditional carbon-intensive economy against this new approach until the new strategies adopted can be reviewed and improved. Countries should therefore be encouraged to engage in an appropriate pattern of development in which competitiveness concerns can also be managed at a higher level through design conditions; international cooperation, as well as taxation, may offer a partial solution to these concerns. However, there are several risks ahead: current international negotiations have shown that progress is slow, and there is a tangible risk of lack of commitment. Equally problematic, even where ambition levels are reasonable, the environment for agreements and the decision-making process can be extremely challenging, and we probably do not have enough time left to tackle the climate challenge in all its variants.

Time is running out, and delaying the move to a Low-Carbon Economy in a world where GHG emissions rise fast will pose a major challenge: population growth and its increasing demand for energy and other resources will make it ever more expensive to forge agreements that include the costs of CO2 externalities in the pricing of energy and consumption.

However, it is reasonable to expect that government and sectoral investments could act as a catalyst for private investment by creating a proactive framework that improves investment conditions and the acceptance of costs and risks by companies and society, enabling reliable results in time. We see private initiatives as essential to implement mitigation and adaptation activities, but they need to be started outside the market. A variety of policies implemented by governments could act together, both directly and indirectly, on prices, consumption and GHG emissions.

In that respect, the market for project-based emission reductions could be an important catalyst for low-carbon investment in several countries by complementing and leveraging other resources toward climate-smart development; such initiatives need to be implemented at the sub-national, regional, national and international levels.

ACKNOWLEDGMENT

I would like to express my gratitude towards Professor Edson Vicente da Silva for his guidance and constant supervision as well as for providing necessary information regarding this project and also for his support. Thanks to CAPES and PRODEMA/UFC as well.

REFERENCES

[1] IPCC. A report of Working Group I of the Intergovernmental Panel on Climate Change for policymakers. In:< http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf>. Accessed: 20/07/2014. 2007.

[2] The Worldwatch Institute. State of the World: Innovations for a Sustainable Economy. 25th Anniversary Edition. In: < http://www.worldwatch.org/files/pdf/SOW08_chapter_1.pdf>. Accessed: 20/10/2013. 2008.

[3] Houghton, J. Global Warming: The Complete Briefing (4th Edition), Cambridge University Press, 2009 reprinted 2012 (pp 305, 316, 399)

[4] The World Bank Group. Brazil low-carbon country case study. The Energy Sector Management Assistance Program (ESMAP). In:<http://siteresources.worldbank.orgf >. Accessed: 20/10/2014. 2010.

[5] The World Bank. Mapping carbon pricing initiatives – Development and prospects. ECOFYS. Carbon Finance at the World Bank. 77955. Washington D.C. In:<http://www-wds.worldbank.org/e>. Accessed: 10/08/2014.

[6] UNFCCC, Decision 1/CP.18, 51, 28 February 2013.

[7] Kollmuss, A., Fuessler, J., New Climate Mitigation Market Mechanisms: Stocktaking after Doha, Ministry of Infrastructure and the Environment (I&M) and Federal Office for the Environment (FOEN), 4 March 2013.

[8] IPCC. Climate Change 2007: Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Synthesis Report. Core Writing Team, Pachauri, R.K. and Reisinger, A. (Eds.). IPCC, Geneva, Switzerland. pp 104. In:<http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr.pdf>. Accessed: 20/08/2014. 2007.

[9] Rausch, S.; Reilly, J. Carbon Tax Revenue and the Budget Deficit: A Win-Win-Win Solution? MIT Joint Program on the Science and Policy of Global Change. Report No. 228. August 2012.

[10] Costa, C.G.F. Climate change and the threat of poverty. Outreach. SIDS. Sustainable Economic Development in Samoa. Empowering Pacific women through marketplaces. Available: < http://www.stakeholderforum.org/fileadmin/files/Outreach_SIDS%20Day%204_SustainableEconomicalDevelopment.pdf>. 1 September 2014.


Session 3: Operation, Optimization and Servicing

Password Security Enhancement by Characteristics of Flick Input with Double Stage C.V. Filtering (Authors: Nozomi Takeuchi, Ryuya Uda)

Performance Evaluation of Cloud E-Marketplaces using Non Preemptive Queuing Model (Authors: A.O. Akingbesote, M.O. Adigun, S.S. Xulu, E. Jembere)

Comparative Analysis of Sparse Signal Recovery Algorithms based on Minimization Norms (Authors: Hassaan Haider, Jawad Ali Shah, Usman Ali)


Password Security Enhancement by Characteristics of

Flick Input with Double Stage C.V. Filtering

Nozomi Takeuchi

School of Computer Science

Tokyo University of Technology

Hachioji, Tokyo, Japan

[email protected]

Ryuya Uda

School of Computer Science

Tokyo University of Technology

Hachioji, Tokyo, Japan

[email protected]

Abstract—Passwords for locking smart phones are usually exposed to the menace of shoulder surfing and smudge attacks. A password can be glanced at while it is being input, since smart phones are usually used in public spaces. Moreover, all the characters of a password can be guessed even if the attacker cannot see the screen of the phone well, since the arrangement of the software keys is fixed and passwords are usually short. In this paper, we propose a method for enhancing password security by applying personal characteristics that can be captured during flick input. When the method is applied, the correct password is hard to find by chance, since attackers cannot distinguish the rejection of a wrong password from the rejection of wrong characteristics. The method in this paper improves on methods in existing papers and is applicable irrespective of the number of registered users.

Keywords-Personal Identification; Flick Input; Smart Phone

I. INTRODUCTION

Password identification methods for locking smart phones are weak, although significant personal information is often stored on smart phones. Ordinary passwords, such as four-digit PINs, can easily be discovered by shoulder surfing or smudge attacks, since smart phones are usually used in public spaces.

As a countermeasure against these attacks, special authentication hardware is implemented on some kinds of smart phones. For example, biometric authentication is one solution to the problem. However, such methods require special devices, such as fingerprint or vein recognition sensors, which increase product costs. Moreover, such devices are sometimes inconvenient in ordinary use; for example, users have to take off their gloves to put a finger on a fingerprint or vein recognition device. Therefore, smart phones with such devices are still in the minority.

There is an easy way to enhance the security of the lock on a smart phone: the longer and more complex the password is, the harder the lock is to break. However, long and complex passwords bother users.

Therefore, we think the best way to enhance security for smart phones is to identify users without any additional special device or complex password. As a result, we focus on the touch panel, which is invariably implemented on any smart phone, in order to identify users.

There are other studies in which touch panels are used for personal identification. However, two of them require a big screen, and another is easily broken by tracing the locus of a finger, as mentioned in chapter II. There are also personal identification methods for smart phones based on gestures. Gestures make passwords harder to break, but the resulting strength is not sufficient for personal identification.

As a solution to these problems, we pay attention to flick input on smart phones. When characteristics of flick input are combined with an ordinary password authentication method, the strength of the password can be increased.

The idea was first presented by Kobata and Terabayashi et al. [1]. It was then improved by Kobata et al. [2] by using 3D (three-dimensional) acceleration sensors to acquire the posture of users. Furthermore, it was improved again by us [3] by using the 3D acceleration sensors not for acquiring posture but for increasing the accuracy of personal identification with additional characteristics of users' flick input.

However, there is a critical problem with the method in those three papers: the number of candidate characters for a password decreases as the number of registered users increases. In this paper, we renew the method to solve this problem.

In chapter II, related works are introduced. In chapter III, the proposed method is explained. In chapter IV, an evaluation of the method is shown. In chapter V, a consideration of the evaluation results is given. Finally, in chapter VI, we summarize the paper.

II. RELATED WORKS

In this chapter, existing research in which characteristics of personal actions are used for personal identification is introduced.

There are studies of personal identification based on inputting characters. One of them is a method using keystroke dynamics on hardware keyboards by Kasukawa et al. [4]. In their research, keystrokes that are both stable and characteristic of individuals are selected from among all keystrokes, so that the influence of fluctuation in keystrokes can be reduced.


Another is a method for increasing the accuracy of identification when typing on hardware keyboards by Ogoshi et al. [5]. In their method, keys are intentionally struck with a certain rhythm, since the difference between individuals' keystrokes is small when experts type without such intention. Unusual typing is required in order to enlarge the differences used for identification.

However, both of these studies require a hardware keyboard, which is not mounted on usual smart phones. Moreover, a threat of eavesdropping on passwords typed on hardware keyboards is pointed out by Zhuang et al. [6]. In their research, it is reported that ninety percent of random five-character passwords can be broken within twenty trials after ten minutes of recording the typing sound. Furthermore, eighty percent of random passwords can be broken within seventy-five trials even if the passwords consist of ten characters. Passwords are broken regardless of typing proficiency, since the eavesdropping exploits differences in the sound of striking individual keys. In their paper, making noise is proposed as a countermeasure against the eavesdropping.

There are also studies of personal identification with touch panels. One of them is a method by Sawamura et al. in which individuals are identified by four types of verification on a multi-touch panel [7]. However, the method has a problem: the paths traced by fingers can be recovered from fingerprints that remain on the special panel, since the panel is never touched for any purpose other than personal identification.

Another is a method for personal identification with five fingers by Iseri et al. [8]. In their research, the paths of five fingers moving from maximally stretched to minimally contracted are used for personal identification. The method also has a problem: a large panel is required for the identification, since the positions of the five fingers must be recorded when the fingers are maximally stretched. The method is not applicable to the small touch panel mounted on a smart phone.

The method by Sae-Bae et al. [9] is almost the same as the method by Iseri et al., and it also requires a big screen, such as that of a 10-inch tablet.

There are also personal identification methods that use the touch panel on a smart phone. Shahzad et al. proposed a method named GEAT for personal identification by gestures made by touching the screen of a smart phone [10]. They evaluated GEAT using 15,009 gesture samples collected from 50 volunteers. Experimental results show that GEAT achieves an average EER (Equal Error Rate) of 0.5 percent with three gestures using only twenty-five training samples. For individual gestures, the FPR (False Positive Rate) of each gesture, averaged over all users, is always below five percent for a TPR (True Positive Rate) of ninety percent. The critical problem of applying their method to personal identification is that the lock of a smart phone can be broken at a rate of five percent, which means that twenty trials by a malicious user will, on average, break the lock. There is a way to decrease the FPR threshold, such as using combinations of three gestures at once.

Luca et al. also proposed a personal identification method using the touch panel on a smart phone [11]. The way identification is performed is almost the same as in GEAT. However, its FPR and TPR are much worse than those of GEAT according to the paper by Shahzad et al. For example, when the TPR of the gesture "Swipe left" is 85.11 percent, the FPR of the method by Luca et al. is forty-eight percent, although the FPR of GEAT is 5.12 percent. In any case, both methods are hard to use for personal identification, as mentioned in the previous paragraph. Moreover, decreasing the FPR threshold causes the TPR threshold to decrease as well. That is to say, characteristics extracted from gestures cannot be applied independently for personal identification.

To solve the problems above, Kobata and Terabayashi et al. proposed a method for personal identification with the touch panel on a smart phone [1]. In the method, characteristics of individuals' flick input are collected and analyzed. In their research, the FAR (False Acceptance Rate) is nine percent even when an eight-character password is revealed, while the FRR (False Rejection Rate) is five percent. Even brute-force attackers who do not know the correct password can log in with a probability of five percent whenever they hit the correct password by chance over many trials.

After the first proposal, Kobata et al. improved the method [2]. The problem of the original method is that the accuracy of flick-input characteristics is unstable, since users freely change their posture when they use their smart phones. Kobata et al. improved the method with 3D acceleration sensors that acquire the posture of users, in order to exclude bad postures, such as lying down, from the enrollment of users' flick-input characteristics.

Moreover, we improved the method further [3]. Kobata et al. applied 3D acceleration sensors only for acquiring the posture of users; we instead apply the sensors to acquire additional characteristics of users' flick input.

However, there is a critical problem with the method in those three papers: the number of candidate characters for a password decreases as the number of registered users increases. Therefore, in this paper we renew the method with double stage C.V. filtering in order to solve this problem.

III. PROPOSED METHOD

In this research, we propose a security enhancement method for personal identification on smart phones. The method enhances security when ordinary text-based passwords are entered by flick input on smart phones. Eleven characteristics are acquired each time a user inputs a character.

A. Acquisition of Characteristics of Flick Input

In our method, fifty-three characters can be used for passwords: the characters from A to Z, the digits from 0 to 9, and seventeen symbols. In our previous method [1][2][3], the characters from A to Z, the digits from 0 to 9 and thirty-one symbols were allowed. Screenshots of the implemented software keyboard are shown in Figure 1; the left one is from the previous method and the right one is from the method in this paper.


Figure 1. Screenshots of implemented keyboards.

The alphabetical key arrangement is the same as that on the iPhone, and the key arrangement of the main symbols is almost the same as on the iPhone. Furthermore, numeric keys are added in the blank space. The reason for the change in key arrangement is that the area painted gray in the right screenshot of Figure 1 is hard to flick. On a real iPhone, that area is used by tapping, not for inputting characters but for functions. Therefore, we eliminate keys in that area so that characteristics of flick input can be acquired stably.

In our previous method, characteristics of flick input were enrolled using pangrams. A pangram is a sentence in which all alphabetic characters are included. In addition, the thirty-one symbols were also enrolled. However, this enrollment bothers examinees. Therefore, in the method in this paper, the enrollment is improved so that characteristics of flick input are acquired while users freely flick keys.

In our proposed method, there are eleven characteristics that can be acquired from the sensors on a smart phone. The characteristics of flick input are stored in a database. Five of the characteristics are obtained from the touch panel mounted on the smart phone. The details of the characteristics are as follows:

1. Migration length on X-axis (DisX) [px]
2. Migration length on Y-axis (DisY) [px]
3. Migration length on X-axis per unit time (VelX) [px/ms]
4. Migration length on Y-axis per unit time (VelY) [px/ms]
5. Touching time (Time) [ms]

Pressure and inclination can also be measured on some kinds of smart phones. However, they are not used in our method, since smart phones with pressure or inclination sensors are not very common. In addition, six characteristics are obtained from the 3D (three-dimensional) acceleration sensors that are also mounted on the smart phone. The details of the characteristics are as follows:

6. Acceleration on X-axis (AccX0) [m/s^2]
7. Acceleration on Y-axis (AccY0) [m/s^2]
8. Acceleration on Z-axis (AccZ0) [m/s^2]
9. Acceleration on X-axis (AccX1) [m/s^2]
10. Acceleration on Y-axis (AccY1) [m/s^2]
11. Acceleration on Z-axis (AccZ1) [m/s^2]

AccX0, AccY0 and AccZ0 are the acceleration values when the user's finger touches the screen of the smart phone, while AccX1, AccY1 and AccZ1 are the acceleration values when the finger is released from the screen.
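The acquisition step can be sketched in Python as follows. This is a minimal illustration of deriving the eleven features from one touch-down/release pair; the function and parameter names are our own assumptions, not the authors' implementation:

```python
def flick_features(x0, y0, t0, acc0, x1, y1, t1, acc1):
    """Derive the eleven per-character features from one flick.

    (x0, y0) [px], t0 [ms] and acc0 [m/s^2, 3-axis tuple] are measured
    at touch-down; (x1, y1), t1 and acc1 at release. All names here are
    hypothetical."""
    dis_x = x1 - x0                            # 1. DisX [px]
    dis_y = y1 - y0                            # 2. DisY [px]
    touch_time = t1 - t0                       # 5. Time [ms]
    dt = touch_time if touch_time > 0 else 1   # guard against division by zero
    vel_x = dis_x / dt                         # 3. VelX (length per unit time)
    vel_y = dis_y / dt                         # 4. VelY
    ax0, ay0, az0 = acc0                       # 6-8. AccX0, AccY0, AccZ0
    ax1, ay1, az1 = acc1                       # 9-11. AccX1, AccY1, AccZ1
    return [dis_x, dis_y, vel_x, vel_y, touch_time,
            ax0, ay0, az0, ax1, ay1, az1]
```

Each flicked character yields one such 11-element vector, which would be appended to the database record for that user and key.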

B. Extraction of Candidate Characters

In the method by Kobata and Terabayashi [1][2], a character whose characteristics lie outside the range of other users' characteristics is selected as a password candidate. However, no character might be chosen when there are many users, since the region outside the range shrinks as the number of users increases. In the method of our previous paper [3], a predetermined threshold is set for the extraction of candidates. However, the threshold must also increase as the number of users grows, or the FAR (False Acceptance Rate) increases with the number of users.

In the method proposed in this paper, the characters of a password are selected by double stage C.V. filtering. A character that can be inputted stably and has characteristics different from those of other users is suitable for a password. This depends on the standard deviation and average of each characteristic of a key, such as DisX, DisY, etc. When the standard deviation is low, the key can be inputted stably; that is, the FRR (False Rejection Rate) is low. When the average differs strongly from that of other users, the key is strongly characteristic; that is, the FAR (False Acceptance Rate) is low. However, standard deviations are not directly comparable, since the number of times each character is inputted differs among characters. Therefore, in our method, the values of the C.V. (Coefficient of Variation) are compared instead of the standard deviations. The C.V. is calculated as shown in Equation (1): it is the square root of the square of the standard deviation, divided by the arithmetic mean. The "x" in Equation (1) denotes neither the X-coordinate nor an inputted character but a common variable.

C.V. = sqrt(STD^2) / AVE    (1)

In the past method [1][2], the area of each characteristic of a key is defined with the AVE (average) and STD (standard deviation), as the range from (AVE - STD) to (AVE + STD). In the previous method [3], it is redefined as a threshold. When the area of a key is outside the area of the same key inputted by other users, the key is selected as a candidate character for a password. However, a key has five or eleven characteristics, such as DisX, DisY, etc., and the area shrinks as the number of registered users increases. That is to say, the method cannot be used in practice.

Therefore, we revise the method with double stage C.V. filtering. The characteristics of a key are merged by summing the C.V. values of DisX, DisY, etc., as shown in Equation (2).

Copyright © 2014 WCST-2014 Technical Co-Sponsored by IEEE UK/RI Computer Chapter 61

Page 62: €¦ · Contents Page . Welcome Message 3 Contents Page 4 Executive Committee 6 Technical Programme Committees 6 . Keynote Speakers . 8 Keynote Speaker 1: Professor John Barrett

CV_K = |CV_K(DisX)| + |CV_K(DisY)| + ... + |CV_K(AccZ1)|    (2)

In Equation (2), "CV" means the C.V. and the subscript "K" means a flicked key. The values of CV_K(DisX), CV_K(DisY), ..., CV_K(AccZ1) are equally weighted, since each standard deviation is divided by its average in order to absorb the difference in amplitude between DisX, DisY, ..., AccZ1. In addition, the absolute value of each CV_K(DisX), CV_K(DisY), ..., CV_K(AccZ1) is taken so that the terms do not cancel each other out. When the average of one of DisX, DisY, ..., AccZ1 is zero, the corresponding CV_K(DisX), CV_K(DisY), ..., CV_K(AccZ1) is also set to zero. In our method, a large standard deviation is improbable when the average is zero, since a finger always moves in the same direction when the same character is inputted by flick input. However, it is probable that the average is zero when the standard deviation is not. For example, when a finger moves up vertically, it might also move slightly horizontally. When the finger moves plus one millimeter horizontally in the first trial and minus one millimeter horizontally in the second trial, the average is zero although the standard deviation is not. Therefore, we eliminate the value of a characteristic from CV_K when the average of that characteristic is zero.

The keys are sorted in ascending order of CV_K, user by user. When CV_K is small, the key is flicked stably; that is, the smaller the CV_K, the smaller the FRR. This gives the first threshold of C.V. filtering, for FRR.

Next, we explain how FAR is controlled by C.V. filtering. The strength of the characteristics of a key is calculated as shown in Equations (3) and (4).

AVE_CVK = (1/n) * Σ_{i=1..n} CV(K,i)    (3)

CHAR(K,i) = sqrt((CV(K,i) - AVE_CVK)^2) / AVE_CVK    (4)

CHAR(K,i) means the strength of the characteristics of key "K" for the ith user, and CV(K,i) means the CV_K of the ith user. That is, CHAR(K,i) is the C.V. of the first-stage C.V. values from Equation (2). This is why the method in this paper is called "double stage C.V. filtering".

The keys are sorted in descending order of CHAR(K,i), user by user. When CHAR(K,i) is big, the key has a strong characteristic; that is, the bigger the CHAR(K,i), the smaller the FAR. This gives the second threshold of C.V. filtering, for FAR.
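The second stage can be sketched as follows, under the same illustrative naming as before: given the CV_K of each registered user for one key, Equation (3) averages them and Equation (4) measures how far user i deviates from that average:

```python
import math

def char_strength(cv_per_user, i):
    """Equations (3) and (4): CHAR(K,i) for one key.

    cv_per_user: the CV_K values for this key, one per registered user;
    i: the index of the user under consideration. Illustrative names."""
    ave = sum(cv_per_user) / len(cv_per_user)            # Equation (3)
    return math.sqrt((cv_per_user[i] - ave) ** 2) / ave  # Equation (4)
```

Keys would then be sorted per user in descending order of this value, so the top of the list holds the candidates with the smallest FAR.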

AVE_CVK is almost constant regardless of the number of registered users when unbiased users are selected. On the other hand, the previous method in the papers [1][2][3] breaks down as the number of registered users increases. The method in this paper is practical, and thresholds for both FRR and FAR can be set with the double stage filtering mentioned above.

The filtered characters are candidates for a password. Each character of a password is randomly selected among the candidates.

In our research, the length of each password is set to eight, according to the guideline of the IPA (Information-technology Promotion Agency, Japan) [12]. Moreover, the password space is determined as shown in Equation (5), according to the guideline of the DoD (Department of Defense) [13].

S = A^M    (5)

S is the size of the password space, A is the number of character patterns, and M is the length of the password. There are twenty-six alphabetic characters, ten numeric characters and seventeen symbols. The size of the password space is smaller than that of the previous method, since fifty-two alphabetic characters from A-Z and a-z, ten numeric characters and thirty-one symbols were usable in the previous method. However, uppercase and lowercase are switched by the key at the bottom-left of the screen in Figure 1; that is, switching case effectively adds one key to the password. Therefore, we think it is better to increase M instead of A in Equation (5).
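Equation (5) is straightforward to evaluate; with A = 53 usable characters and M = 8 as set above:

```python
def password_space(a, m):
    """Equation (5): S = A^M, the size of the password space."""
    return a ** m

# 26 letters + 10 digits + 17 symbols = 53 characters, length 8:
s = password_space(26 + 10 + 17, 8)  # 53 ** 8 possible passwords
```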

C. Identification by Flick Input

In the identification phase, a user inputs each key of the password with flick input. When the TCV_K of the key is under the threshold, the key passes the first filter mentioned in section III.B. The threshold is chosen from the values of CV_K in Equation (2). TCV_K is calculated as shown in Equation (6). Of course, the values of DisX, DisY, etc. in this equation are those of the same user; that is, in practical use, the values of the owner of the smart phone are used.

TCV_K = sqrt((DisX - AVE_K(DisX))^2) / AVE_K(DisX) + sqrt((DisY - AVE_K(DisY))^2) / AVE_K(DisY) + ... + sqrt((AccZ1 - AVE_K(AccZ1))^2) / AVE_K(AccZ1)    (6)

When the TCHAR(K,i) of the inputted key is over the threshold, the key passes the second filter mentioned in section III.B. The threshold is chosen from the values of CHAR(K,i) in Equation (4). TCHAR(K,i) is calculated as shown in Equation (7), where TCV(K,i) means the TCV_K of the ith user.


TCHAR(K,i) = sqrt((TCV(K,i) - AVE_CVK)^2) / AVE_CVK    (7)

When all the keys inputted for a password pass the double stage filters, the user is identified as the correct user.
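The identification decision is therefore a conjunction over the password's keys, which can be sketched as follows; `identify` and its arguments are illustrative names of our own:

```python
def identify(keys, cv_threshold, char_threshold):
    """Accept the user only if every inputted key passes both filters.

    keys: one (tcv, tchar) pair per password character, where tcv is
    TCV_K from Equation (6) and tchar is TCHAR(K,i) from Equation (7).
    The thresholds come from the enrolled CV_K and CHAR(K,i) values."""
    return all(tcv < cv_threshold and tchar > char_threshold
               for tcv, tchar in keys)
```

Because a wrong character and wrong flick characteristics produce the same rejection, an attacker cannot tell which filter failed.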

IV. EVALUATION

We evaluated our proposed method with ten examinees at our university. The attributes of the examinees are shown in Table I.

TABLE I. ATTRIBUTES OF EXAMINEES

User            A               B         C          D          E
Phone Type      Xperia acro HD  Xperia A  Xperia Z1  IS05       iPhone 5
Size of Phone   4.3             4.6       5.0        3.4        4.0
Flick           Usually         No        No         Sometimes  Usually
Dominant Arm    Right           Right     Right      Right      Left

User            F               G               H           I          J
Phone Type      MEDIAS BR       Xperia acro HD  Xperia arc  iPhone 5S  Xperia Z1 f
Size of Phone   3.6             4.3             4.2         4.0        4.3
Flick           No              No              No          Often      Usually
Dominant Arm    Right           Right           Right       Left       Right

In Table I, the examinees are identified by letters from A to J in the attribute "User". The attribute "Phone Type" is the type of smart phone each examinee usually uses, and "Size of Phone" is the screen size of that phone in inches. The attribute "Flick" is the frequency of flick input by each examinee, and "Dominant Arm" is the examinee's dominant arm. The smart phone used for the evaluation in this chapter is an Xperia acro HD. Each examinee inputs every character on our software keyboard about ten to twenty times.

The number of candidate characters for a password is shown in Table II. The column headings are thresholds of CV_K in Equation (2), and the row headings are thresholds of CHAR(K,i) in Equation (4), as described in section III.B. The values in the table show the number of candidate characters for a password. When the horizontal (CV_K) threshold increases, FRR also increases; when the vertical (CHAR) threshold decreases, FAR increases.

FAR and FRR are calculated according to the candidate characters for password in Table II. Three typical characters for each cell in Table II are chosen as shown in Table III. In Table III, there are three characters in one cell which corresponds to a cell in Table II. The left one in each cell is the best character for password. The character has the lowest value of CVK in Equation (2). The right one in each cell is the worst character for password. The character has the highest value of CVK. The

center one in each cell is a character which has median (the middle value) of CVK in a set of measurements among the candidates for the cell.

TABLE II. THE NUMBER OF CANDIDATE CHARACTERS FOR PASSWORD

        CV_K threshold:  13   9   3
User A      0.25         36  26   2
            1.25          5   4   1
            2.25          3   3   1
User B      0.25         24  14   0
            1.25          5   3   0
            2.25          3   2   0
User C      0.25         23  18   0
            1.25          5   4   0
            2.25          3   2   0
User D      0.25         25  20   0
            1.25          4   2   0
            2.25          2   1   0
User E      0.25         35  25   5
            1.25          5   3   0
            2.25          3   2   0
User F      0.25         21  11   0
            1.25          4   3   0
            2.25          2   1   0
User G      0.25         22  11   0
            1.25          5   2   0
            2.25          3   2   0
User H      0.25         26  11   0
            1.25          5   1   0
            2.25          2   1   0
User I      0.25         32  28   8
            1.25          2   1   1
            2.25          1   1   0
User J      0.25         32  26  11
            1.25          8   4   2
            2.25          4   2   1

In some cells, there is no character in the space for the best and/or median characters. This is because there are fewer than three candidate characters in those cells.

FAR and FRR for four typical examinees are shown in Tables IV - VII. FAR is shown separately user by user, since there is a possibility that the characteristics of one user are almost the same as those of the selected user even though those of the other users are completely different.
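For reference, FAR and FRR as used in Tables IV - VII can be computed from accept/reject outcomes as follows; this helper is our own illustration, not code from the paper:

```python
def far_frr(owner_trials, attacker_trials):
    """FAR and FRR in percent from boolean accept outcomes.

    owner_trials: True where the legitimate user was accepted;
    attacker_trials: True where another user was (wrongly) accepted."""
    frr = 100.0 * sum(not ok for ok in owner_trials) / len(owner_trials)
    far = 100.0 * sum(attacker_trials) / len(attacker_trials)
    return far, frr
```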


TABLE III. TYPICAL CHARACTERS FOR PASSWORD (best / median / worst)

        CV_K threshold:  13        9         3
User A      0.25         8/B/E     8/L/E     S/ /E
            1.25         J/W/E     U/H/E      / /E
            2.25         W/H/E     W/H/E      / /E
User B      0.25         V/3/E     V/7/E      / /
            1.25         U/H/E     U/H/E      / /
            2.25         H/P/E     H/ /E      / /
User C      0.25         "/F/E     ?/F/E      / /
            1.25         K/H/E     K/H/E      / /
            2.25         H/ /E     H/ /E      / /
User D      0.25         ?/#/P     ?/_/H      / /
            1.25         "@"/H/P   U/ /H      / /
            2.25         H/ /P      / /H      / /
User E      0.25         Y/Q/E     Y/$/E     Y/\/Z
            1.25         A/W/E     U/H/E      / /
            2.25         W/H/E     H/ /E      / /
User F      0.25         0/?/E     0/Y/H      / /
            1.25         K/H/E     K/U/H      / /
            2.25         H/ /E      / /H      / /
User G      0.25         ~/O/E     ~/V/E      / /
            1.25         K/J/E     H/ /E      / /
            2.25         J/H/E     H/ /E      / /
User H      0.25         "/R/E     "/!/H      / /
            1.25         7/U/E      / /H      / /
            2.25         H/ /E      / /H      / /
User I      0.25         9/C/P     9/S/P     \/S/K
            1.25         J/ /P      / /P      / /
            2.25          / /P      / /P      / /
User J      0.25         V/7/E     V/L/E     I/L/H
            1.25         K/J/E     K/H/E     U/ /H
            2.25         J/P/E     H/ /E      / /H

V. CONSIDERATION

The scores of examinee J are good, as shown in Table IV. She usually uses a smart phone of the same size as the phone used for this evaluation. Moreover, she usually inputs characters by flick input, as shown in Table I. Her FRR ranges from 10.0 to 40.0, while almost all values of FAR are 0.00, when the threshold of CV_K is nine in Table IV.

The attributes of examinee A are almost the same as those of examinee J. He usually uses an Xperia acro HD, which is the same phone used for this evaluation, and he usually inputs characters by flick input, as shown in Table I. Almost all his FAR values are also 0.00 when the threshold of CV_K is nine. However, his FRR values range from 18.2 to 54.6; that is, examinee J can input characters more stably than examinee A.

TABLE IV. FAR AND FRR OF EXAMINEE J (GOOD SAMPLE)

FAR[%]        13   9   3
A   0.25   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    1.25   18.2 60.0 0.00 18.2 0.00 0.00 0.00 0.00
    2.25   60.0 63.6 0.00 0.00 0.00 0.00
B   0.25   0.00 0.00 39.1 0.00 8.33 39.1 9.09 8.33 0.00
    1.25   21.7 70.4 39.1 21.7 0.00 39.1 0.00 0.00
    2.25   70.4 33.6 39.1 0.00 39.1 0.00
C   0.25   7.69 0.00 27.3 7.69 0.00 27.3 0.00 0.00 0.00
    1.25   18.2 26.9 27.3 18.2 0.00 27.3 0.00 0.00
    2.25   26.9 27.3 27.3 0.00 27.3 0.00
D   0.25   5.88 0.00 22.2 5.88 0.00 22.2 0.00 0.00 0.00
    1.25   0.00 42.9 22.2 0.00 0.00 22.2 0.00 0.00
    2.25   42.9 34.8 22.2 0.00 22.2 0.00
E   0.25   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    1.25   15.4 51.6 0.00 15.4 0.00 0.00 0.00 0.00
    2.25   51.6 37.5 0.00 0.00 0.00 0.00
F   0.25   0.00 0.00 12.5 0.00 0.00 12.5 0.00 0.00 0.00
    1.25   5.88 29.6 12.5 5.88 0.00 12.5 0.00 0.00
    2.25   29.6 13.8 12.5 0.00 12.5 0.00
G   0.25   8.33 0.00 14.3 8.33 0.00 14.3 0.00 0.00 0.00
    1.25   14.3 50.0 14.3 14.3 0.00 14.3 0.00 0.00
    2.25   50.0 30.0 14.3 0.00 14.3 0.00
H   0.25   5.56 0.00 0.00 5.56 0.00 0.00 0.00 0.00 0.00
    1.25   5.56 66.7 0.00 5.56 0.00 0.00 0.00 0.00
    2.25   66.7 60.0 0.00 0.00 0.00 0.00
I   0.25   0.00 0.00 33.3 0.00 0.00 33.3 0.00 0.00 0.00
    1.25   0.00 64.0 33.3 0.00 0.00 33.3 0.00 0.00
    2.25   64.0 42.9 33.3 0.00 33.3 0.00

FRR[%]        13   9   3
    0.25   33.3 30.0 10.0 33.3 40.0 10.0 40.0 40.0 40.0
    1.25   30.0 16.7 10.0 30.0 40.0 10.0 90.0 40.0
    2.25   16.7 20.0 10.0 40.0 10.0 40.0

TABLE V. FAR AND FRR OF EXAMINEE I (NORMAL SAMPLE)

FAR[%]        13   9   3
A   0.25   18.2 90.0 0.00 18.2 9.09 0.00 16.7 9.09 0.00
    1.25   46.7 0.00 0.00
    2.25   0.00 0.00
B   0.25   13.6 100 23.3 13.6 0.00 23.3 0.00 0.00 4.35
    1.25   40.7 23.3 23.3
    2.25   23.3 23.3
C   0.25   0.00 100 9.09 0.00 0.00 9.09 0.00 0.00 0.00
    1.25   30.8 9.09 9.09
    2.25   9.09 9.09
D   0.25   0.00 92.9 4.35 0.00 0.00 4.35 0.00 0.00 0.00
    1.25   57.1 4.35 4.35
    2.25   4.35 4.35
E   0.25   0.00 20.0 25.0 0.00 0.00 25.0 0.00 0.00 0.00
    1.25   61.3 25.0 25.0
    2.25   25.0 25.0
F   0.25   17.7 94.4 13.8 17.7 0.00 13.8 0.00 0.00 0.00
    1.25   11.1 13.8 13.8
    2.25   13.8 13.8
G   0.25   0.00 57.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    1.25   16.7 0.00 0.00
    2.25   0.00 0.00
H   0.25   0.00 100 50.0 0.00 0.00 50.0 0.00 0.00 0.00
    1.25   76.2 50.0 50.0
    2.25   50.0 50.0
J   0.25   0.00 100 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    1.25   50.0 0.00 0.00
    2.25   0.00 0.00

FRR[%]        13   9   3
    0.25   22.7 37.5 28.6 22.7 41.2 28.6 19.1 41.2 63.2
    1.25   28 28.6 28.6
    2.25   28.6 28.6


TABLE VI. FAR AND FRR OF EXAMINEE F (BAD SAMPLE)

FAR[%]        13   9   3
A   0.25   16.7 83.3 18.2 16.7 22.2 80.0
    1.25   100 80.0 18.2 100 90.9 80.0
    2.25   80.0 18.2 80.0
B   0.25   82.6 57.7 91.3 82.6 29.2 21.7
    1.25   91.3 21.7 91.3 91.3 91.7 21.7
    2.25   21.7 91.3 21.7
C   0.25   85.7 62.5 18.2 85.7 76.9 40.0
    1.25   36.4 40.0 18.2 36.4 55.6 40.0
    2.25   40.0 18.2 40.0
D   0.25   7.69 25.0 5.56 7.69 18.8 0.00
    1.25   6.67 0.00 5.56 6.67 0.00 0.00
    2.25   0.00 5.56 0.00
E   0.25   0.00 17.4 4.00 0.00 0.00 54.6
    1.25   65.4 54.6 4.00 65.4 43.5 54.6
    2.25   54.6 4.00 54.6
G   0.25   15.4 6.25 0.00 15.4 8.33 6.25
    1.25   0.00 6.25 0.00 0.00 0.00 6.25
    2.25   6.25 0.00 6.25
H   0.25   27.8 31.8 38.9 27.8 47.1 31.3
    1.25   50.0 31.3 38.9 50.0 47.4 31.3
    2.25   31.3 38.9 31.3
I   0.25   30.8 73.7 59.3 30.8 79.0 21.7
    1.25   0.00 21.7 59.3 0.00 89.5 21.7
    2.25   21.7 59.3 21.7
J   0.25   50.0 80.0 30.0 50.0 100 50.0
    1.25   60.0 50.0 30.0 60.0 10.0 50.0
    2.25   50.0 30.0 50.0

FRR[%]        13   9   3
    0.25   23.5 22.2 18.8 23.5 30.0 18.8
    1.25   17.7 18.8 18.8 17.7 11.1 18.8
    2.25   18.8 18.8 18.8

TABLE VII. FAR AND FRR OF EXAMINEE E (SPECIAL SAMPLE)

FAR[%]        13   9   3
A   0.25   11.1 40.0 100 11.1 0.00 100 11.1 0.00 0.00
    1.25   100 100 100 9.09 30.0 100
    2.25   100 30.0 100 30.0 100
B   0.25   0.00 40.9 100 0.00 0.00 100 0.00 0.00 0.00
    1.25   88.5 74.2 100 33.3 73.9 100
    2.25   74.2 73.9 100 73.9 100
C   0.25   0.00 33.3 63.6 0.00 0.00 63.6 0.00 0.00 0.00
    1.25   86.4 72.2 63.6 22.2 20.0 63.6
    2.25   72.2 20.0 63.6 20.0 63.6
D   0.25   0.00 18.8 44.4 0.00 0.00 44.4 0.00 0.00 0.00
    1.25   77.3 63.2 44.4 0.00 6.67 44.4
    2.25   63.2 6.67 44.4 6.67 44.4
E   0.25   0.00 41.2 75.0 0.00 0.00 75.0 0.00 0.00 0.00
    1.25   96.2 75.0 75.0 11.1 18.8 75.0
    2.25   75.0 18.8 75.0 18.8 75.0
F   0.25   0.00 0.00 42.9 0.00 0.00 42.9 0.00 0.00 0.00
    1.25   52.6 52.4 42.9 6.25 0.00 42.9
    2.25   52.4 0.00 42.9 0.00 42.9
G   0.25   0.00 47.1 61.1 0.00 0.00 61.1 0.00 0.00 0.00
    1.25   94.7 95.0 61.1 15.8 6.25 61.1
    2.25   95.0 6.25 61.1 6.25 61.1
H   0.25   0.00 0.00 77.8 0.00 0.00 77.8 0.00 0.00 0.00
    1.25   91.1 87.0 77.8 0.00 0.00 77.8
    2.25   87.0 0.00 77.8 0.00 77.8
J   0.25   0.00 0.00 80.0 0.00 0.00 80.0 0.00 0.00 0.00
    1.25   100 81.8 80.0 0.00 10.0 80.0
    2.25   81.8 10.0 80.0 10.0 80.0

FRR[%]        13   9   3
    0.25   27.8 17.4 12.0 27.8 17.7 12.0 27.8 30.0 56.5
    1.25   23.5 14.8 12.0 26.1 22.7 12.0
    2.25   14.8 22.7 12.0 22.7 12.0

Scores of examinee I are normal as shown in Table V. The rate of FRR is from 28.6 to 41.2, while almost all values of FAR are 0.00, when threshold of CVK is nine. Her score seems

good although she often, not usually, inputs characters by flick input as shown in Table I. One of the reasons is thought that her dominant arm is different from that of the most examinees. Examinee E and I are lefty as shown in Table I. Movement of their thumb is inverted horizontally when it is compared with the movement of thumb of other examinees.

Examinee E is also left-handed and usually inputs characters by flick (Table I), yet her score differs markedly from that of examinee I. The worst FRR value of examinee E is 56.5 and her average FRR is 21.2 (Table VII), while the worst FRR value of examinee I is 63.2 and her average FRR is 31.9 (Table V). The likely reason is that examinee E flicks more frequently than examinee I. On the other hand, the FAR values in examinee E's Table VII are high, whereas most of the FAR values in examinee I's Table V are 0.00. That is to say, some left-handed examinees have strong characteristics precisely because the majority are right-handed, while other left-handed examinees input characters with weak characteristics, moving the thumb of the left hand much as a right-handed user moves the thumb of the right hand.

The score of examinee J is a good example. Almost all FAR values are 0.00 when the CVK threshold is three in Table IV, and most of the remaining FAR values are quite low. Furthermore, FRR in Table IV is at most 40.0. Thus, examinees can be identified effectively with the proposed method when the examinee usually flicks and the phone used for evaluation is the same size as his/her own phone.

On the other hand, the score of examinee F is a bad example. Some FAR values in Table VI are 100 and many others are high. The FRR values, however, are not especially high, because the movement of his flick is unstable. That is to say, unstable movement widens the acceptance range used for identification, and the wider acceptance range also admits other users, which increases FAR. Thus, examinees cannot be identified effectively when the examinee never flicks and the phone used for evaluation differs in size from his/her own phone.
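The FAR/FRR trade-off discussed above can be reproduced with a small script. This is a hedged sketch, not the authors' implementation: the `genuine` and `impostor` lists are hypothetical per-attempt distance scores between an input and the registered template, and an attempt is accepted when its distance falls below the acceptance threshold. Widening the threshold, as unstable flick movement effectively does, lowers FRR but raises FAR.

```python
def far_frr(genuine, impostor, threshold):
    """FRR: fraction of genuine attempts rejected (distance >= threshold).
       FAR: fraction of impostor attempts accepted (distance < threshold)."""
    frr = sum(d >= threshold for d in genuine) / len(genuine)
    far = sum(d < threshold for d in impostor) / len(impostor)
    return far, frr

# Hypothetical distance scores between an input and the registered template.
genuine = [0.9, 1.1, 1.4, 2.0, 1.2, 0.8]   # correct user
impostor = [1.8, 2.5, 3.1, 2.2, 2.9, 1.9]  # other users

tight = far_frr(genuine, impostor, 1.5)    # narrow acceptance range
loose = far_frr(genuine, impostor, 2.6)    # wide acceptance range
print(tight, loose)
```

With these hypothetical scores, the narrow threshold gives FAR 0 with FRR 1/6, while the wide one gives FRR 0 with FAR 2/3, mirroring examinee F's unstable-movement result.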

VI. CONCLUSIONS

The proposed method is suitable for a user who usually inputs characters by flick. FAR values range from 0.00 to 39.1 when the CVK threshold is nine in Table IV, while FRR values range from 10.0 to 40.0. On the other hand, some users exhibit so few distinctive characteristics that their FAR values are high, as shown in Table VII. The proposed method is still applicable, however, since FRR stays low when users can input characters stably. The method is also applicable to a user who never flicks, since FRR can be kept low. The proposed method is not a stand-alone personal identification method but one to be combined with an ordinary password method. If the correct user is rejected after entering the correct password, he/she can simply try again; the method therefore causes little inconvenience as long as FRR is not too high. On the other hand, malicious users


cannot know whether the inputted password is correct even when they eventually hit the correct password with a brute-force or dictionary attack. The proposed method is useful because users can keep using an ordinary password method to lock their smartphones while its security is enhanced by the proposed method. In this paper, the algorithms for personal identification were improved; with the improved method, the FAR values remain constant even as the number of registered users increases.

ACKNOWLEDGEMENT

This work was supported by MEXT/JSPS KAKENHI Grant Number 26870660.

REFERENCES

[1] S. Kobata, Y. Terabayashi and R. Uda, Proposal of Method for Personal Identification with Flick Input, In Proceedings of the 15th IASTED International Conference on Control and Applications (Honolulu, USA, August 26-28, 2013), CA 2013, IASTED, Calgary, pp.136-142, 2013.

[2] S. Kobata, R. Uda and S. Tezuka, Personal Identification by Flick Input with Acceleration Sensor, In Proceedings of the 8th International Conference on Ubiquitous Information Management and Communication (Siem Reap, Cambodia January 9-11, 2014), ICUIMC 2014, ACM, P2-15, 2014.

[3] N. Takeuchi, S. Kobata and R. Uda, Improvement of Personal Identification by Flick Input with Acceleration Sensor, In Proceedings of the 1st IEEE International Workshop on User Centered Design and Adaptive Systems (Vasteras, Sweden July 21-25, 2014), UCDAS 2014, IEEE, To be published.

[4] M. Kasukawa, Y. Mori, K. Komatsu, H. Akaike and H. Kakuda, An Evaluation and Improvement of User Authentication System Based on Keystroke Timing Data, IPSJ Journal, Vol.33, No.5, pp.728-735, 1992. (Japanese)

[5] Y. Ogoshi, A. Hinata, S. Hirose and H. Kimura, Improving User Authentication Based on Keystroke Intervals by Using Intentional Keystroke Rhythm, IPSJ Journal, Vol.44, No.2, pp.397-400, 2003. (Japanese)

[6] L. Zhuang, F. Zhou and J. D. Tygar, Keyboard Acoustic Emanations Revisited, In Proceedings of the 12th ACM Conference on Computer and Communications Security (November, 2005), CCS'05, ACM, New York, NY, pp.373-382, 2005.

[7] T. Sawamura, M. Narita, N. Noji, T. Katoh, B. B. Bista and T. Takata, A Proposal of an Authentication Method Using Multi-Touch Screen, In Proceedings of the Computer Security Symposium 2010 (Okayama, Japan, October 19-21, 2010), CSS 2010, IPSJ, Tokyo, pp.645-650, 2010. (Japanese)

[8] H. Iseri and E. Okamoto, A study of touch panels biometric based on behavioral characteristics, In Proceedings of the Computer Security Symposium 2011 (Niigata, Japan, October 19-21, 2011), CSS 2011, IPSJ, Tokyo, pp.84-88, 2011. (Japanese)

[9] N. Sae-Bae, K. Ahmed, K. Isbister and N. Memon, Biometric-rich gestures: a novel approach to authentication on multi-touch devices, In Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI'12), ACM, pp.977-986, 2012.

[10] M. Shahzad, A. X. Liu and A. Samuel, Secure unlocking of mobile touch screen devices by simple gestures: you can see it but you can not do it, In Proc. of the 19th annual international conference on Mobile computing & networking (MobiCom'13), ACM, pp.39-50, 2013.

[11] A. D. Luca, A. Hang, F. Brudy, C. Lindner and H. Hussmann, Touch me once and I know it's you!: implicit authentication based on touch screen patterns, in Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI'12), ACM, pp.987-996, 2012.

[12] IT Security Center, Information-technology Promotion Agency, Japan (IPA/ISEC), Reporting Status of Vulnerability-related Information about Software Products and Websites - 3rd Quarter of 2008 (July - September) -, http://www.ipa.go.jp/files/000018119.pdf

[13] R. L. Brotzman, Password Management Guideline, Department of Defense (April 1985), Computer Security Center, U.S.A., 1985.


Performance Evaluation of Cloud E-Marketplaces using Non-Preemptive Queuing Model

A.O. Akingbesote, M.O. Adigun, S.S. Xulu, E. Jembere
Department of Computer Science, University of Zululand, X1001, KwaDlangezwa, 3886, South Africa
[email protected]

Abstract— With the drift of consumers to Cloud e-marketplaces in search of affordable and cost-effective services, waiting time is of interest to every consumer and is also a key source of competitive advantage for any cloud e-market provider. Keeping consumers waiting may incur high consumer-dissatisfaction costs, such as loss of future business and the actual processing costs of complaints. The performance impact on consumers' waiting time has not been fully evaluated in the context of differentiated service provisioning. In this research, we model and evaluate a typical cloud e-marketplace with two classes of consumers under a non-preemptive priority discipline, and compare the performance of these two classes against a non-priority discipline. Our approach uses both an analytical and a simulation model. Early results reveal that the average total waiting time is independent of the service discipline. Furthermore, unlike the non-priority case, where the waiting-time distributions of the two classes are almost equal as server utilization increases, under non-preemptive priority the class 1 consumers' waiting time approaches a finite limit while that of class 2 consumers deviates only slightly from the average total waiting time.

Keywords— E-marketplaces; Consumers; Waiting Time; Non-Preemptive Priority

I. INTRODUCTION

E-marketplaces can be described as virtual environments for buying and selling services [1]. They differ from traditional marketplaces in that business transactions occur over communication networks without clients and producers meeting face to face. This virtual, dynamic, real-time platform lets the consumers (clients) and producers communicate better through Internet technologies. Hence, e-marketplaces are regarded as an important part of e-business solutions in enabling supply-chain integration to maintain business value and meet growing competitive necessity [2]. Our definition of consumers is in line with [3]: applications requesting service from the provider.

These markets have witnessed remarkable evolution, including the emergence of Service-Oriented Architecture (SOA). SOA refers to the paradigm of organizational modelling of systems aimed at composing large business operations from existing services; it evolved from object-oriented computing. Closely associated with SOA is the emerging idea of Web-service e-marketplaces, one important benefit of which is improved communication between producers and consumers. The free flow of information between service stakeholders means that competitive markets exist which promote the composition of differentiated products from existing components, giving consumers the opportunity to adopt a multi-level strategy for selecting services [4].

The quest to address challenges such as the high costs of maintaining equipment and human resources brought about the utility-computing concept, first envisioned by scholars such as Herb Grosch in the 1950s and John McCarthy in the 1960s. The current trend towards cloud e-marketplaces is a natural evolution of services computing into a model in which services are provisioned much like household utilities such as electricity, gas and water. One basic characteristic of this provisioning model is that users consume resources and are billed according to their personal demand [5]. Three major characteristics set cloud e-marketplaces apart from traditional e-marketplaces: on-demand provisioning of the services used, elasticity, and management of the services by cloud providers [6].

While cost efficiency, almost unlimited storage, backup and recovery, automatic software integration, easy access to information and quick deployment are the benefits of using cloud e-marketplace services, performance, security and vulnerability to external attacks and threats remain major challenges that have not been fully addressed [7] [8] [9]. Other issues are the geographically distributed nature of data centers in a cloud e-marketplace environment and the architectural shift to container-based data centers, which have posed new challenges in the design, deployment and management of cloud computing platforms. All of these challenges may inadvertently affect the performance of cloud e-marketplaces.

Much of the research attention in the area of cloud e-marketplaces has been directed towards implementation, while less attention has been given to performance-related issues [10] [6]. With the current drift of consumers to the


cloud e-marketplaces for affordable and cost-effective services under different service-provisioning scenarios, response time is of interest to every consumer and is also a key source of competitive advantage for any cloud e-market provider. It is common knowledge that a low level of service may be inexpensive, at least in the short run, but may incur high consumer-dissatisfaction costs, such as loss of future business and the actual processing costs of complaints. A high-quality service level costs the cloud provider more but results in lower dissatisfaction costs [11]. Proper evaluation of the performance impact on consumers' waiting time is therefore imperative.

The goal of this research is to evaluate the performance of a cloud e-marketplace in the context of different service-provisioning disciplines. In this paper we model a typical cloud e-marketplace as a network of queues with two classes, assigning the higher priority to class 1 and the lower priority to class 2. To achieve this research goal, a mathematical-analytical and simulation approach was adopted.

The remainder of this paper is organized as follows. Section II discusses related work. Section III introduces our analytical model together with the numerical and simulation set-up. Section IV presents our results and discussion, and Section V concludes.

II. LITERATURE REVIEW

This work builds on prior research in performance modeling. Most of that research, however, targets networks; see [12] [13]. On cloud performance, for example, the authors of [14] use an M/M/1 model in which the cloud consists of only one server. Unfortunately, generalisations that are valid for networks may not be realistic for the current cloud e-marketplace. In [15], the authors model the cloud as a series of queues, with each service station an M/M/1 queue, for optimal resource allocation: a typical cloud e-market is modeled as three concatenated queuing systems (a schedule queue, a computation queue and a transmission queue), and the relationship between service response time and the resources allocated to each queuing system is analyzed theoretically. The work of [7] uses an M/G/c model to evaluate a cloud server firm under the assumption that the number of server machines is unrestricted; it demonstrates how request response time and the number of tasks in the system can be assessed with sufficient accuracy. While all of these works move towards accurate waiting-time prediction, they apply only where the service-provisioning discipline is uniform.

In [16], the authors use a discrete-time preemptive priority model to analyze two classes of customers served under high and low priority. That work shows, through numerical examples, the influence of the priority discipline and the service-time distribution on the performance measures. In [17], the authors consider the waiting time in an accumulating-priority queue in a single-server system with Poisson arrivals and general service times, deriving the waiting-time distribution rather than the mean waiting time. All of the research discussed so far is based on M/M/1/Pr models, which may not reflect a typical cloud e-market: in a cloud e-marketplace, consumers pass through a series of queues before reaching their destination.

Our work is most closely related to [12], a performance analysis of two priority queues in tandem. The author creates two dedicated servers, one serving each priority class in the network of queues, and supports the argument with numerical validation. While this model performs well, especially in the context of telecommunication networks, we envisage a drawback in a cloud e-marketplace when no consumer arrives from one of the classes: the server dedicated to that class sits idle while the load is directed to the other server, which may affect cost. In our work we remove this bottleneck by:

- modelling each service station as M/M/c/k rather than M/M/1;
- allowing any server to serve any application, i.e. no server is dedicated to a particular class of incoming application;
- introducing a feedback database server that collates statistical information to determine when servers should be scaled up or down, which to the best of our knowledge has not appeared in the literature in the context of differentiated service

III. THE ANALYTICAL MODEL DESCRIPTION

The proposed model is shown in Fig. 1. It considers two priority classes in a network of queues. Class 1 consumers arrive with rate λ1 and class 2 consumers with rate λ2. Consumer requests are transmitted to the Dispatcher-In web queue and then dispatched, with equal probability, to any of the web service stations for processing by the server machines. At each station another queue builds up, after which the consumer's request leaves through the Dispatcher-Out queue. The idea is that when a lower-priority (class 2) consumer is in the queue and a higher-priority (class 1) consumer arrives, the latter joins the line ahead of the former. The model is non-preemptive: whatever the priority of the consumer in service, it completes its service before another is admitted, and consumers of the same class value are served First Come First Served (FCFS). Arrivals and service times follow exponential distributions, with an infinite population. A further assumption of this network, in line with [15], is that the latency of internal communication between the Dispatcher-In, the web-queue service stations and the Dispatcher-Out is insignificant. To obtain our performance measures, we followed


the steps stated in [18] [19] and the law of conservation of flow [20].

A. Mathematical Model

We model the Dispatcher-In and the Dispatcher-Out as M/M/1/Pr queues and the web-queue stations as M/M/c/Pr queues. For clarity we define the following terms:

ρ = server utilization, i.e. the percentage of time the system is busy
μ = service rate in the Dispatcher-In, in each of the web queues, and in the Dispatcher-Out
P0 = probability of no consumer in the system
Pn = probability of n consumers in the system
L(r) = mean number of consumers in the system for priority class r in the dispatcher queue
Lq(r) = mean queue length of class r in the dispatcher queue
Wq(r) = waiting time in the queue for class r in the dispatcher queue
W = expected total waiting time experienced by the two classes in each service station
W̄ = average waiting time experienced by the two classes over the service stations
H(y, z) = the joint generating function for the two priorities, regardless of the one in service

A. Dispatcher-In Queue

We first model the Dispatcher-In web queue as M/M/1/Pr.

Let p(m, n, r; t) = Pr{at time t, m class 1 consumers and n class 2 consumers are in the Dispatcher-In, with a consumer of class r = 1 or 2 in service}, together with the corresponding boundary probabilities for an empty system. The stationary difference-differential equations for these probabilities are numbered (1)-(9). [Their explicit forms did not survive extraction.]

Changing the service discipline has no effect on the probability of idleness, so P0 = 1 - ρ. The fraction of time the server is busy with a class r consumer is ρr = λr/μ, with ρ = ρ1 + ρ2 < 1.

Because of the triple subscripts, we use two-dimensional generating functions: for each class r in service we define Hr(y, z) as the generating function of p(m, n, r) over m and n, and H(y, z) = H1(y, z) + H2(y, z) represents the two priorities regardless of the class in service.

With L(1) and L(2) the mean numbers of consumers of the two priority classes present in the dispatcher queue, and Lq(1), Lq(2) the corresponding mean queue lengths, multiplying equations (1)-(9) by the appropriate powers of y and z and summing gives expressions for H1(y, z) and H2(y, z).


The boundary values need to be known in order to obtain the generating functions fully; this is done by summing the appropriate multiples of the equations in (1)-(9) that involve the empty-system states. Substituting these into Eqs. 23 and 24 makes H1 and H2 functions of known quantities, and the normalization condition H(1, 1) = 1 then fixes the remaining constant. Taking the partial derivatives of H with respect to y and z and evaluating them at y = z = 1 yields the mean measures of effectiveness, i.e. the mean queue lengths of the two classes.
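Although the intermediate generating-function algebra was lost in extraction, the mean waiting times produced by an M/M/1 analysis with two non-preemptive priority classes are classical (Cobham's formulas), and they reproduce the paper's headline observation that the overall average wait is independent of the discipline. The sketch below is illustrative standard queueing theory, not the authors' derivation; λ1, λ2 and μ follow the definitions above, with example values chosen arbitrarily.

```python
def nonpreemptive_mm1_waits(lam1, lam2, mu):
    """Cobham's mean queueing delays for two non-preemptive priority
    classes in an M/M/1 queue (class 1 has the higher priority)."""
    rho1, rho2 = lam1 / mu, lam2 / mu
    rho = rho1 + rho2
    assert rho < 1, "queue must be stable"
    w0 = rho / mu                      # mean residual work found on arrival
    wq1 = w0 / (1 - rho1)
    wq2 = w0 / ((1 - rho1) * (1 - rho1 - rho2))
    return wq1, wq2

lam1 = lam2 = 0.3
mu = 1.0
wq1, wq2 = nonpreemptive_mm1_waits(lam1, lam2, mu)

# FCFS (non-priority) M/M/1 mean queueing delay for comparison.
rho = (lam1 + lam2) / mu
wq_fcfs = rho / (mu * (1 - rho))

# Conservation: the arrival-weighted average wait equals the FCFS wait.
avg = (lam1 * wq1 + lam2 * wq2) / (lam1 + lam2)
print(round(wq1, 4), round(wq2, 4), round(avg, 4), round(wq_fcfs, 4))
```

With λ1 = λ2 = 0.3 and μ = 1 this prints 0.8571 2.1429 1.5 1.5: class 1 waits far less and class 2 far more, yet the arrival-weighted average equals the FCFS wait, exactly the conservation result reported in Section IV.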

B. Web Station Queue

Unlike the M/M/1/Pr queue, the M/M/c/Pr queue is governed by identical exponential service distributions for each priority at each of the c channels within a station; as stated earlier, the service rates are equal.

The delay of a tagged consumer of priority k is decomposed into three parts: the time required to serve the consumers of the kth priority already in line ahead of the tagged consumer; the service time of the higher-priority consumers that arrive while the tagged consumer waits; and the amount of time remaining until the next server becomes available. [The defining equations, including (40), did not survive extraction.] Taking expectations and summing these components gives the expected waiting time in a service station, and averaging over the j service stations gives the overall average time spent in the web stations.
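For the M/M/c stations, the probability that an arriving consumer must wait for a server, which drives the "time remaining until the next server becomes available" term, is given by the Erlang-C formula. As a hedged illustration (standard queueing theory, not the authors' code), the mean queueing delay of an M/M/c station can be computed as follows, with arrival rate lam and per-server rate mu:

```python
import math

def mmc_wait(lam, mu, c):
    """Mean queueing delay in an M/M/c queue via the Erlang C formula."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c
    assert rho < 1, "queue must be stable"
    # Erlang C: probability an arriving consumer has to wait.
    top = a**c / (math.factorial(c) * (1 - rho))
    bottom = sum(a**k / math.factorial(k) for k in range(c)) + top
    p_wait = top / bottom
    return p_wait / (c * mu - lam)

print(round(mmc_wait(1.2, 1.0, 2), 4))
```

With λ = 1.2, μ = 1 and c = 2 (as in the numerical set-up) this prints 0.5625; setting c = 1 reduces it to the familiar M/M/1 delay ρ/(μ(1 − ρ)).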

C. Dispatcher-Out

Our Dispatcher-Out is similar to the Dispatcher-In, and its performance measures are therefore obtained in the same way.


Our concern in this experiment is the waiting time experienced by consumers; the total waiting time a consumer experiences in the queues is therefore the sum of the waits at the Dispatcher-In, a web service station and the Dispatcher-Out.

B. Numerical Validation and Simulation

The model is illustrated by numerical examples, and the impact on the performance of the two priority classes is analyzed in the results and discussion section. We consider two arrival processes, where we set λ1 and λ2 [values lost in extraction] and c = 2 in each of the service stations.

We simulated the model with the Arena discrete-event simulator using the same values, to ascertain the degree of variability. The simulation was run with a replication length of 1000, 24 hours per day with the base time in hours, and was replicated 5 times. In addition, we set up a similar experiment with the same configuration but under a non-priority (FCFS) discipline; the comparison is also discussed in Section IV.
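Arena is a commercial simulator; the mechanics of the non-preemptive two-class discipline can also be sketched in plain Python with a heap-ordered queue. This is a simplified single-station stand-in for the full dispatcher/web-station network, with hypothetical rates, not the authors' Arena model.

```python
import heapq, random

def simulate(lam1, lam2, mu, n_arrivals, seed=1):
    """Single-server, two-class, non-preemptive priority queue.
    Class 1 jumps ahead in the queue but never interrupts the job in
    service; FCFS within a class. Returns the mean wait of each class."""
    rng = random.Random(seed)
    arrivals = []                          # merged (time, class) stream
    for cls, lam in ((1, lam1), (2, lam2)):
        t = 0.0
        for _ in range(n_arrivals):
            t += rng.expovariate(lam)
            arrivals.append((t, cls))
    arrivals.sort()

    queue = []                             # heap of (class, arrival time)
    waits = {1: [], 2: []}
    free_at = 0.0                          # when the server next becomes idle
    for t, cls in arrivals:
        # Start every queued job whose service can begin before this arrival.
        while queue and free_at <= t:
            c, at = heapq.heappop(queue)
            waits[c].append(free_at - at)
            free_at += rng.expovariate(mu)
        if free_at < t:
            free_at = t                    # server sat idle until this arrival
        heapq.heappush(queue, (cls, t))
    while queue:                           # drain the remaining jobs
        c, at = heapq.heappop(queue)
        waits[c].append(free_at - at)
        free_at += rng.expovariate(mu)
    return sum(waits[1]) / len(waits[1]), sum(waits[2]) / len(waits[2])

w1, w2 = simulate(0.3, 0.3, 1.0, 20000)
print(w1 < w2)   # class 1 waits less on average
```

With these hypothetical rates the class means should land near the analytical values for an M/M/1 non-preemptive priority queue (about 0.86 and 2.14 for classes 1 and 2).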

IV. RESULTS AND DISCUSSION

Our first results are shown in Fig. 2 and Table 1, which give the non-preemptive waiting times of consumers in our analytical and simulation models; they reveal that the analytical approach agrees with the simulation. The graph in Fig. 3 shows the waiting times of the two classes as functions of server utilization (ρ). From this non-preemptive graph we observe that as ρ → 1, the waiting time of class 2 consumers → ∞: it grows until its deviation from the total waiting time is very small, unlike class 1 consumers, who experience only a small change and approach an essentially finite limit.

In Fig. 4 the consumers' total waiting time is added. Here we notice that the class 2 consumers' waiting time is very close to the total waiting time; that is, class 1 consumers spend little time waiting at the expense of class 2 consumers. For example, where the total waiting time is 1.114846 in Table 1 and Fig. 4, the waiting time of a class 1 consumer is 0.040076 while that of class 2 is 1.074769, almost equal to the total waiting time. This is a real danger, as it may cause considerable consumer dissatisfaction and costs, and reduce consumers' patronage of the cloud provider.

Table 2 gives the non-priority discipline, for which we use FCFS. We observe that the total waiting times in the simulations of Table 1 (non-preemptive priority) and Table 2 (non-priority) are the same, but the per-class waiting-time distributions differ. That is, the total waiting time is independent of the service discipline, while the waiting-time distributions are not. From Table 2 and Fig. 5, the waiting-time distributions of the two classes under the non-priority discipline are almost the same, so the class 2 curve overshadows that of class 1 in the figure.

The overall performance of the non-preemptive priority and non-priority disciplines is shown in Fig. 6. The waiting-time distributions of all the classes lie in a close range when server utilization is small, so the effect may be barely felt when the consumers' arrival rate is small. As utilization grows, class 2 under non-preemptive priority (Class 2 np) waits longer than both class 1 under non-preemptive priority (Class 1 np) and the two classes under the non-priority discipline. As noted earlier, the distributions of class 1 non-priority (Class 1 npr) and class 2 non-priority (Class 2 npr) are the same, which is why only the Class 2 npr curve is visible. We deduce that great caution is required when introducing this policy in a cloud e-marketplace with differentiated service provisioning.

One advantage of this model is a considerable improvement in consumer waiting time for both classes over [17], where each service station is dedicated to one class, because here no station is idle unless no arrivals occur.


V. CONCLUSION

In this paper we presented a performance evaluation of a typical cloud e-marketplace modelled as a two-priority queuing system, and studied the impact on the waiting times of the two consumer classes. The results reveal that class 2 experiences a longer waiting time than class 1, but the average waiting time remains the same under both disciplines; that is, the total waiting time is independent of the service discipline. Although this can be beneficial where service provisioning is differentiated by time and cost, it should be applied with caution, because as ρ → 1 the expected mean number of consumers and the waiting time in the class 2 queue grow without bound, leading to greater dissatisfaction among class 2 consumers.

This work can be further improved, for instance by introducing a threshold mechanism to control the queue length.

REFERENCES

[1] A. O. Akingbesote, M. O. Adigun, J. Oladosu, E. Jembere and I. Kaseeram, "The trade-off between consumer's satisfaction and resource service level by e-market providers in e-marketplaces," in International Conference on Electrical Engineering and Computer Sciences (EECS), Hong Kong, pp. 395-404, Dec. 2013.

[2] W. K. Chong, "Performances of B2B e-Marketplace for SMEs: The Research Methods and Survey Results," vol. 9, 2009.

[3] H. Goudarzi and M. Pedram, "Maximizing Profit in Cloud Computing System via Resource Allocation," in 2011 31st Int. Conf. Distrib. Comput. Syst. Workshops, pp. 1-6, Jun. 2011.

[4] A. O. Akingbesote, M. O. Adigun, J. B. Oladosu, and E. Jembere, "A Quality of Service Aware Multi-Level Strategy for Selection of Optimal Web Service."

[5] H. Khazaei, "Performance Modeling of Cloud Computing Centers," Jan. 2013.

[6] H. Khazaei, J. Misic, and V. B. Misic, "Performance Analysis of Cloud Computing Centers Using M/G/m/m+r Queuing Systems," IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 5, pp. 936-943, May 2012.

[7] K. Popović and Ž. Hocenski, "Cloud computing security issues and challenges," pp. 344-349, 2010.

[8] http://mobiledevices.bout.com/od/additionalresources/a/Cloud-omputing-Is-It-Really-All-That-Beneficial.htm. Accessed March 2014.

[9] http://www.rickscloud.com/how-performance-issues-impact-cloud-adoption. Accessed March 2014.

[10] E. Pakbaznia and M. Pedram, "Minimizing data center cooling and server power costs," in Proceedings of the 14th ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED '09), 2009, p. 145.

[11] F. Mustafa and T. L. McCluskey, "Dynamic Web Service Composition," in 2009 International Conference on Computer Engineering and Technology, 2009, pp. 463-467.

[12] F. Kamoun, "Performance Analysis of Two Priority Queuing Systems in Tandem," vol. 2012, no. November, pp. 509-518, 2012.

[13] F. Kamoun, "Performance analysis of a non-preemptive priority queuing system subjected to a correlated Markovian interruption process," Comput. Oper. Res., vol. 35, no. 12, pp. 3969-3988, Dec. 2008.

[14] K. Xiong and H. Perros, "Service Performance and Analysis in Cloud Computing," in 2009 Congress on Services - I, pp. 693-700, Jul. 2009.

[15] X. Nan, Y. He, and L. Guan, "Optimal resource allocation for multimedia cloud based on queuing model," in 2011 IEEE 13th Int. Workshop on Multimedia Signal Processing, pp. 1-6, Oct. 2011.

[16] J. Walraevens, B. Steyaert, and H. Bruneel, "Analysis of a discrete-time preemptive resume priority buffer," Eur. J. Oper. Res., vol. 186, no. 1, pp. 182-201, Apr. 2008.

[17] D. A. Stanford, P. Taylor, and I. Ziedins, "Waiting time distributions in the accumulating priority queue," Queueing Syst., vol. 77, no. 3, pp. 297-330, Dec. 2013.

[18] D. Gross and C. M. Harris, Fundamentals of Queueing Theory, 2nd ed., Aug. 1985.

[19] P. B. M, P. S. K. P, and P. P. G. V, "Performance factors of cloud computing data centers using M/G/m/m+r queuing systems," vol. 2, no. 9, pp. 6-10, 2012.

[20] L. Kleinrock, Queueing Systems, New York: John Wiley and Sons, 1975, ch. 1, pp. 3-7.


Comparative Analysis of Sparse Signal Recovery Algorithms based on Minimization Norms

Hassaan Haider #1, Jawad Ali Shah *, Usman Ali #

# Department of Electrical Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Pakistan.
1 [email protected]
* Department of Electronic Engineering, Faculty of Engineering and Technology, International Islamic University, Islamabad, Pakistan.

Abstract—In conventional sensing, the Nyquist sampling theorem dictates the minimum sampling rate. However, due to constraints such as a slow sampling process, limited memory and sensor cost, the Nyquist rate is difficult to achieve in some applications, and when the sampling rate falls below it, aliasing artifacts appear in the recovered signal. Compressed Sensing (CS) is a modern sampling technique with which a signal can be recovered faithfully even from fewer samples, provided the signal/image of interest is sparse; this holds for most signals/images, which are sparse in an appropriate domain, e.g. the wavelet transform or finite differences. Recovering the sparse signal efficiently from compressively sampled data is the most challenging part of CS. The recovery problem is a highly ill-posed, underdetermined system of linear equations, so additional regularization constraints are required: since there can be infinitely many solutions, finding the best one from few measurements becomes an optimization problem in which a cost function is minimized. Several reconstruction methods exist in the literature, and they can be classified by the norms used in minimizing the objective function. This paper presents a comparative study of modern sparse signal recovery algorithms using different norms: Smoothed l0, l1-magic, and mixed l1-l2 norm based Iterative Shrinkage Algorithms (ISA), e.g. SSF, IRLS and PCD. All algorithms are tested on the recovery of a sparse image. The performance measures used to analyse the efficiency of the algorithms objectively are mean square error, correlation and computational time.

Keywords—Sparse Signal Recovery, Compressed Sensing, Norms, Inverse Problem.

I. INTRODUCTION

Sparse signals can be defined as signals whose total energy is concentrated in very few components compared to their dimension [1]. Mathematically, a signal x ∈ R^n is K-sparse if ||x||_0 <= K << n, where l0 is the quasi-norm computed as ||x||_0 = #{j : x_j != 0} [2].
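The l0 quasi-norm is simply a count of nonzero entries; a minimal NumPy sketch (the array values are illustrative):

```python
import numpy as np

def l0_norm(x, tol=0.0):
    """l0 quasi-norm ||x||_0 = #{j : x_j != 0}: count the nonzero entries."""
    return int(np.sum(np.abs(x) > tol))

# Illustrative signal of length n = 8 with K = 2 nonzero components,
# i.e. a 2-sparse signal: almost all of its energy sits in two entries.
x = np.array([0.0, 3.1, 0.0, 0.0, -0.7, 0.0, 0.0, 0.0])
```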

CS theory exploits sparsity in signals, which in turn allows faithful recovery even when the sampling rate is well below the Nyquist rate. The information rate in a signal is generally much lower than its bandwidth, which makes accurate recovery possible from few samples [3]. CS has many application areas, e.g. Magnetic Resonance Imaging (MRI), where the data acquisition modality is slow and expensive; acquiring MRI data with CS can potentially reduce the time and cost of scanning [4].

In CS, every measurement is acquired as a projection of the actual signal onto a test function a_i [5]:

y_i = <x, a_i> = a_i^T x,  1 <= i <= m << n,  or in matrix form  y = Ax  (1)

where the sensing matrix A : R^n -> R^m reduces the dimension to m rows, composed of the vectors a_1^T, a_2^T, ..., a_m^T, and the set of measurements is collected in the vector y ∈ R^m.

The sensing matrix A must obey the Restricted Isometry Property (RIP) in order to prevent the information in the signal from being distorted; random Bernoulli and Gaussian matrices are examples of matrices that satisfy the RIP. The other condition for faithful recovery is that there must be sufficiently many measurements, i.e. m >= K log(n/K); the sparse signal can then be recovered with very high probability ([6], [7]), where m measurements are taken from a signal of length n having K non-zero elements.
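The measurement model (1) together with the m >= K log(n/K) bound can be sketched as follows (the sizes and random seed are illustrative; a random Gaussian A satisfies the RIP with high probability):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 1024, 20                       # signal length and sparsity level
m = int(np.ceil(K * np.log(n / K)))   # measurement bound m >= K log(n/K)

# Build a K-sparse signal: K random positions with Gaussian amplitudes.
x = np.zeros(n)
support = rng.choice(n, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian sensing matrix A: R^n -> R^m.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# m compressive measurements y = A x, with m << n.
y = A @ x
```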

In the system of linear equations defined by (1), many different signals can produce the same measurements, even when A has full row rank, because the system is underdetermined. This poses a highly ill-posed problem, which requires additional constraints to single out an appropriate solution [2].

The paper is organized as follows: Section II contains a literature review of recovery techniques classified by norms. Section III discusses different recovery algorithms. Section IV elaborates the simulation results. Section V concludes the paper based on the algorithms' performance.

II. NORM-BASED CLASSIFICATION OF RECOVERY METHODS

A. l2 Norm

The minimum-l2-norm solution, also known as the least-squares solution, gives the minimum-total-energy estimate; it is unique and computationally tractable, and can be computed analytically using (2). However, the energy of the solution is spread over a large number of elements, which results in a dense estimate that is unsuitable for sparse signals.

x_hat = arg min_x ||y - Ax||_2^2 = A^T (A A^T)^{-1} y  (2)
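Equation (2) can be implemented directly; a small sketch (the test problem is illustrative) showing that the minimum-l2 estimate matches the measurements exactly yet is dense:

```python
import numpy as np

def min_l2_solution(A, y):
    """Minimum-energy solution of y = A x from (2):
    x = A^T (A A^T)^{-1} y, assuming A has full row rank."""
    return A.T @ np.linalg.solve(A @ A.T, y)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 20))   # underdetermined: 5 equations, 20 unknowns
x_true = np.zeros(20)
x_true[[2, 11]] = [1.0, -2.0]      # 2-sparse ground truth
y = A @ x_true
x_hat = min_l2_solution(A, y)      # consistent with y, but dense
```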

Copyright © 2014 WCST-2014 Technical Co-Sponsored by IEEE UK/RI Computer Chapter 73


B. l0 Norm

Minimization of the l0-norm objective defined by (3) is the principled method for sparse signal recovery. However, it is a combinatorial problem: an exhaustive search over the (n choose K) possible supports of the reconstructed vector is required, so it is not suitable for practical applications. The cost function is also non-convex and noise sensitive. l0-norm minimization uses the sparsity constraint to find an estimate of the solution to (1) with few nonzero entries [7].

x_hat = arg min_x ||y - Ax||_2^2 subject to ||x||_0 <= K  (3)

Consequently, the l0 norm can be replaced by an lp norm with p ∈ (0, 1], or approximated by smooth functions such as Σ_j log(1 + α x_j^2) or Σ_j (1 - exp(-α x_j^2)) [9]. FOCUSS [9] and Smoothed l0 [10] are examples of such l0-based optimization methods.

C. l1 Norm

The l1-norm-based cost function is defined by (4); it is convex and promotes sparsity in the estimated solution, whereas the l0-norm problem in (3) is non-convex and generally intractable. We can therefore replace the l0 norm with the l1 norm and remodel the problem in (3) as:

x_hat = arg min_x ||Ax - y||_2^2 subject to ||x||_1 <= ε  (4)

where ε is a positive relaxation constant. The LASSO algorithm uses the formulation in (4) [11]. Basis Pursuit (BP) and l1-magic instead use the equality-constrained form (5) to recast the optimization problem in (1) as [12]:

x_hat = arg min_x ||x||_1 subject to y = Ax  (5)

Interior-point methods and projected gradient methods are applicable to the problems defined by (4) and (5).
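The equality-constrained problem (5) can be recast as a linear program by splitting x = u - v with u, v >= 0, which is the standard LP recast behind BP and l1-magic. A sketch using SciPy's general-purpose linprog (not the l1-magic code itself; the test problem is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve (5): min ||x||_1 s.t. y = A x, as an LP over (u, v) with
    x = u - v, u >= 0, v >= 0, and objective sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)            # ||x||_1 = sum(u) + sum(v) at the optimum
    A_eq = np.hstack([A, -A])     # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(2)
n, m, K = 60, 30, 4
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)
x_hat = basis_pursuit(A, A @ x_true)   # sparse, measurement-consistent estimate
```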

D. Mixed l1l2 Norm

Iterative-Shrinkage Algorithms (ISA) are another popular family of methods; they remodel the problem in (1) by minimizing the mixed l1l2-norm objective [15]:

x_hat = arg min_x (||y - Ax||_2^2 + λ ||x||_1)  (6)

The parameter λ >= 0 is a regularization parameter that controls the sparsity of the estimated solution: for larger values of λ the estimate will be very sparse, while for smaller values it will be dense. Steepest-descent and conjugate-gradient methods are not efficient for large-scale applications; this problem is amicably solved by ISA [8].

III. SPARSE SIGNAL RECOVERY ALGORITHMS

A sparse signal can be reconstructed by any of the methods discussed in the previous section; each algorithm may be suitable for certain application areas and requirements.

A. l1 Magic

l1-magic solves the equality-constrained problem (5) using linear programming (LP) and second-order cone programming (SOCP). The LP formulation is solved with a primal-dual method, whereas the SOCP formulation uses log-barrier methods. Details of the l1-magic algorithm can be found in [20].

B. Smoothed l0 Norm

The Smoothed l0 method is very effective at solving the atomic decomposition problem defined by (1). It approximates the l0 norm using a smooth Gaussian function f(x) = e^{-x^2/(2σ^2)}, where the width σ controls the quality of the approximation. This smoothing reduces the noise-sensitivity issue of the raw l0 norm, and it allows gradient-based methods to be used for finding the optimal solution: a steepest-ascent method is applied to maximize the smoothed sparsity measure. The algorithm is elaborated further in [10].
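A minimal sketch of the Smoothed l0 idea (the parameters sigma_decay, mu, L and the test problem are illustrative choices, not the tuned settings of [10]): a few steepest-ascent steps on the smoothed sparsity measure, a projection back onto {x : Ax = y}, and a gradually decreasing σ:

```python
import numpy as np

def smoothed_l0(A, y, sigma_min=1e-3, sigma_decay=0.6, L=3, mu=2.0):
    """Sketch of SL0: maximise sum(exp(-x^2 / (2 sigma^2))) by steepest
    ascent, projecting back onto {x : A x = y} after every step, while
    shrinking sigma from coarse to fine."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # start from the minimum-l2 solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):                # L steepest-ascent iterations
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - mu * delta            # push small entries toward zero
            x = x - A_pinv @ (A @ x - y)  # project onto the constraint set
        sigma *= sigma_decay              # refine the l0 approximation
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[5, 17, 40, 59]] = [1.0, -1.5, 0.8, 2.0]
x_hat = smoothed_l0(A, A @ x_true)
```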

C. Mixed l1l2-based Iterative Shrinkage Algorithms

Iterative-Shrinkage Algorithms (ISA) recast the problem in (1) as the unconstrained mixed l1l2-norm optimization problem (6). They are built around a shrinkage function x_o = S_λ(α): values below a threshold T are set to 0 (S_λ(α) = 0 for |α| < T) and values above T are shrunk toward zero [8]. Another common feature of ISA is that each iteration involves only multiplications by the sensing matrix A and its transpose A^T. ISA are popular due to their simplicity and effectiveness in solving high-dimensional instances of (6) [8]. Variants of ISA differ in their shrinkage rule or their objective function; the SSF [16], FISTA [17], PCD [15], and IRLS [18] algorithms all belong to the Iterative-Shrinkage family.

1) Separable Surrogate Functional (SSF): The SSF algorithm belongs to the ISA family [16]. The surrogate distance function in (7) is added to the objective function in (6):

d(x, x_0) = (c/2) ||x - x_0||_2^2 - (1/2) ||Ax - Ax_0||_2^2  (7)

The value of c is chosen to guarantee convergence of the estimated solution by making d(x, x_0) strictly convex, i.e. cI - A^T A > 0. This condition is satisfied only if c > ||A^T A||_2 = λ_max(A^T A). Recently, it has been shown in [19] that c > 0.5 λ_max(A^T A) already guarantees convergence of the SSF algorithm. The objective function can be rewritten as:

h(x) = (1/2) ||y - Ax||_2^2 + λ ||x||_1 + (c/2) ||x - x_0||_2^2 - (1/2) ||Ax - Ax_0||_2^2  (8)

The resulting Iterative-Shrinkage update is defined in (9):

x_{k+1} = S_{λ/c} ( (1/c) A^T (y - Ax_k) + x_k )  (9)
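The SSF update (9) with soft-threshold shrinkage can be sketched in a few lines (the value of λ and the test problem are illustrative; c is set just above λ_max(A^T A)):

```python
import numpy as np

def soft_threshold(v, t):
    """Shrinkage S_t: entries with |v| < t go to 0, the rest shrink by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ssf(A, y, lam, n_iter=500):
    """SSF iteration (9): x_{k+1} = S_{lam/c}((1/c) A^T (y - A x_k) + x_k),
    with c > lambda_max(A^T A) to guarantee convergence."""
    c = 1.01 * np.linalg.norm(A, 2) ** 2   # c > ||A^T A||_2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + (A.T @ (y - A @ x)) / c, lam / c)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[3, 20, 41, 55]] = [1.2, -0.9, 1.7, -1.1]
y = A @ x_true
x_hat = ssf(A, y, lam=0.05)
```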

2) Iterative Reweighted Least Squares (IRLS): The IRLS algorithm is another ISA, used for many optimization problems; it is a popular method for computing maximum-likelihood estimates in linear models, and it minimizes a weighted residual rather than a plain l2 norm [18]. In (6), ||x||_1 is expressed as x^T W^{-1}(x) x, where W(x) is the diagonal matrix with W[k, k] = |x[k]| on its diagonal. The objective function in (6) can then be rewritten as:

f(x) = (1/2) ||y - Ax||_2^2 + λ x^T W^{-1}(x) x  (10)

Since the performance of plain IRLS is poor for high-dimensional signals, a modified algorithm was proposed in [18]; it becomes an Iterative-Shrinkage algorithm by simply subtracting and adding c·x, as in (8). The resulting Iterative-Shrinkage process is:

x_{k+1} = ( (λ/c) W^{-1}(x_k) + I )^{-1} ( (1/c) A^T y - (1/c)(A^T A - cI) x_k )
        = S · ( (1/c) A^T (y - Ax_k) + x_k )  (11)

where the diagonal shrinkage matrix S is defined as:

S = ( (λ/c) W^{-1}(x_k) + I )^{-1} = ( (λ/c) I + W(x_k) )^{-1} W(x_k)

and c is a constant chosen as c > λ_max(A^T A)/2 to ensure convergence of the estimated solution of (1), where λ_max(A^T A) denotes the maximum eigenvalue of A^T A.
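Because the shrinkage matrix S in (11) is diagonal, it can be applied entry-wise; a sketch with a small numerical floor eps on W(x) to avoid division by zero, starting from the minimum-l2 solution (the parameters, the floor, and the test problem are illustrative):

```python
import numpy as np

def irls_shrink(A, y, lam, n_iter=500, eps=1e-8):
    """IRLS-as-ISA iteration (11): the same gradient step as SSF, followed
    by the diagonal shrink S = ((lam/c) I + W)^{-1} W with W = diag(|x_k|)."""
    c = 1.01 * np.linalg.norm(A, 2) ** 2
    x = np.linalg.pinv(A) @ y            # dense minimum-l2 starting point
    for _ in range(n_iter):
        g = x + (A.T @ (y - A @ x)) / c  # gradient step, as in (9)
        w = np.abs(x) + eps              # diagonal of W(x_k), floored
        x = (w / (lam / c + w)) * g      # entry-wise shrink by S
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[7, 22, 38, 60]] = [1.5, -1.0, 0.7, -2.0]
y = A @ x_true
x_hat = irls_shrink(A, y, lam=0.05)
```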

3) Parallel Coordinate Descent (PCD): The PCD algorithm, defined in [15], was developed from the coordinate descent algorithm. In (6), each entry of x is initially updated separately by coordinate descent, and then n such steps (one per entry) are combined, with guaranteed convergence.

Let x_0 be the current solution. The per-coordinate objective is:

g(q) = (1/2) ||y - Ax_0 - a_i (q - x_0[i])||_2^2 + λ ρ(q)  (12)

where a_i is the i-th column of A; the term a_i (q - x_0[i]) updates the i-th entry. The optimal q is determined using (13):

q_opt = S ( (1/||a_i||_2^2) a_i^T (y - Ax_0) + x_0[i] )  (13)

For high-dimensional signals, handling each coordinate separately is prohibitive; therefore, the n such steps are combined in (14):

v_0 = Σ_{i=1}^{n} e_i · S ( (1/||a_i||_2^2) a_i^T (y - Ax_0) + x_0[i] )  (14)

where the vector e_i has a one in the i-th entry and zeros elsewhere. Equation (14) simplifies to:

v_0 = S ( diag(A^T A)^{-1} A^T (y - Ax_0) + x_0 )  (15)

To guarantee convergence of the n combined coordinate-descent steps, a Line Search (LS) is used, and the iteration becomes:

x_{k+1} = x_k + μ (v_k - x_k) = x_k + μ ( S(diag(A^T A)^{-1} A^T (y - Ax_k) + x_k) - x_k )  (16)

where μ is the step size chosen by the LS algorithm. The resulting one-dimensional function to optimize is given by (17), which requires only multiplication of A by x_k and v_k:

h(μ) = (1/2) ||y - A(x_k + μ(v_k - x_k))||_2^2 + λ 1^T ρ(x_k + μ(v_k - x_k))  (17)
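The combined PCD step (15)-(16) can be sketched as follows; a conservative fixed step size stands in for the line search over (17), and the test problem is illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pcd(A, y, lam, n_iter=500):
    """PCD iteration (16): the parallel coordinate update (15) scaled by
    diag(A^T A)^{-1}, then a damped step x + mu (v - x). A conservative
    fixed mu replaces the line search over (17)."""
    d = np.sum(A * A, axis=0)                    # diag(A^T A) = ||a_i||_2^2
    mu = np.min(d) / np.linalg.norm(A, 2) ** 2   # keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = soft_threshold(x + (A.T @ (y - A @ x)) / d, lam / d)   # eq. (15)
        x = x + mu * (v - x)                                        # eq. (16)
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[4, 19, 33, 58]] = [2.0, -1.2, 0.9, -1.6]
y = A @ x_true
x_hat = pcd(A, y, lam=0.05)
```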

IV. SIMULATION RESULTS AND DISCUSSION

This paper uses a random Gaussian sensing matrix A ∈ R^{550×1024}, where m = 550 is the number of observations and n = 1024 is the length of the estimated sparse signal. The rows of A are orthogonalized by the Gram-Schmidt procedure. All algorithms are tested on an image X ∈ R^{32×32}. The number of steepest-ascent iterations in the Smoothed l0 algorithm is set to three. The parameter λ is set to 0.007 for the IRLS and SSF algorithms and to 0.085 for PCD. Each of the Iterative-Shrinkage algorithms is run for 500 iterations.

The performance measures used to objectively analyze the algorithms are the mean square error (MSE), the correlation between the original and recovered images, and the computational time of each algorithm.

The MSE is calculated using (18):

MSE = ||X_hat - X_0||_2^2 / ||X_0||_2^2  (18)

where X_hat is the image recovered by the algorithm and X_0 is the original image. Correlation is another statistical measure of the similarity between the original image X_0 and the recovered image X_hat; a correlation close to one means the two images are closely matched. It is calculated using (19):

ρ(X_hat, X_0) = cov(X_hat, X_0) / (σ_{X_hat} σ_{X_0})  (19)

where cov(X_hat, X_0) is the covariance between X_hat and X_0, and σ_{X_hat} and σ_{X_0} are their respective standard deviations. Computational time was measured using MATLAB 2012b on a 2.4 GHz Core i5 processor with 6 GB RAM.
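The two performance measures (18) and (19) map directly onto NumPy (the function names here are our own):

```python
import numpy as np

def mse(X_hat, X0):
    """Relative MSE of (18): ||X_hat - X0||^2 / ||X0||^2."""
    return np.sum((X_hat - X0) ** 2) / np.sum(X0 ** 2)

def correlation(X_hat, X0):
    """Correlation coefficient of (19): cov(X_hat, X0) / (std * std)."""
    a, b = X_hat.ravel(), X0.ravel()
    return np.cov(a, b)[0, 1] / (np.std(a, ddof=1) * np.std(b, ddof=1))
```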

Fig. 1 shows the images recovered by the different sparse signal recovery algorithms alongside the original sparse image. The image recovered by the Least Squares method demonstrates its failure to recover the sparse image: very few pixels have zero amplitude. The image recovered by the l1-magic technique is a very accurate reconstruction. The Smoothed l0 method performs better than Least Squares but shows a visible difference from the original image. The SSF algorithm recovers the sparse image very accurately, with no visible difference from the original. The IRLS algorithm recovers the non-zero pixels accurately, but some pixels have minor errors in their recovered amplitudes. PCD performs better than Smoothed l0 at recovering the support of the image, but significant error is visible in its reconstruction.

Table I compares the algorithms on MSE, correlation, and computational time. The Least Squares method has the highest MSE and the lowest correlation, while it takes the minimum computational time thanks to its analytical solution. l1-magic has the lowest MSE and the best correlation, but its execution time is relatively high, which can be problematic for large-scale applications. The Smoothed l0 algorithm has better MSE and correlation than Least Squares but takes more



Fig. 1: Recovered images from various algorithms

TABLE I
PERFORMANCE COMPARISON OF DIFFERENT ALGORITHMS

Algorithm      MSE        Correlation   Computational Time (s)
Least Squares  4.71e-01   0.70002       0.063
l1 Magic       3.16e-12   1.00000       0.507
Smoothed l0    1.35e-02   0.99241       0.320
SSF            4.09e-04   0.99994       0.334
IRLS           1.52e-03   0.99979       0.334
PCD            5.64e-02   0.98860       0.600

Fig. 2: Minimization of ISA Objective function iteratively

Fig. 3: MSE of Iterative-Shrinkage Algorithms at each iteration

computational time than the Least Squares method and less time than l1-magic. The SSF algorithm achieves very low MSE and high correlation between the recovered and original images; its computational time is almost the same as Smoothed l0's, yet it achieves much better accuracy. The IRLS algorithm's computational time is almost the same as SSF's, but its MSE is higher and its correlation slightly lower. The PCD algorithm has a higher MSE than the Smoothed l0 method; its correlation is better than Least Squares' but still below Smoothed l0's. PCD is also the slowest of all the methods.

Fig. 2 shows how the Iterative-Shrinkage algorithms minimize the objective function over the iterations. The IRLS method reduces the cost function quickly at first but stalls before reaching the optimal solution. The PCD algorithm fails to minimize the cost function effectively. The SSF algorithm converges slowly in the initial iterations but eventually attains the minimum value of the objective function.

Fig. 3 depicts the iterative decrease in MSE for the Iterative-Shrinkage algorithms; it closely mirrors the behaviour of the cost-function minimization. The PCD method has the highest mean square error, consistent with its failure to minimize the cost function. IRLS reduces the MSE quickly in the first few iterations but slows down afterwards. The SSF algorithm reduces the MSE more slowly than IRLS initially, but finally achieves the best MSE of the three.



V. CONCLUSION

Recovery of sparse images from compressively sampled data is a challenging task, and various methods exist in the literature for recovering sparse signals/images, each with its pros and cons for specific application areas and requirements. The traditional l2-norm approach has proven ineffective at recovering sparse images. The l1-based sparse signal recovery methods are very effective, but they have a higher computational cost for large-scale applications. Solutions based on approximating the l0 norm can also be very effective for certain types of applications. The mixed l1l2-norm-based ISA are attractive due to their simplicity of implementation, and their low computational time makes them well suited to high-dimensional applications.

ACKNOWLEDGMENT

We would like to thank NUST (CEME) for supporting this research work by providing grants for carrying out and publishing the research.

REFERENCES

[1] Needell, Deanna, Joel A. Tropp, and Roman Vershynin. "Greedy signal recovery review." In Signals, Systems and Computers, 2008 42nd Asilomar Conference on, pp. 1048-1050. IEEE, 2008.
[2] Tropp, Joel A., and Stephen J. Wright. "Computational methods for sparse solution of linear inverse problems." Proceedings of the IEEE 98, no. 6 (2010): 948-958.
[3] Candès, Emmanuel J., and Michael B. Wakin. "An introduction to compressive sampling." IEEE Signal Processing Magazine 25, no. 2 (2008): 21-30.
[4] Lustig, Michael, David L. Donoho, Juan M. Santos, and John M. Pauly. "Compressed sensing MRI." IEEE Signal Processing Magazine 25, no. 2 (2008): 72-82.
[5] Romberg, Justin. "Imaging via compressive sampling [introduction to compressive sampling and recovery via convex programming]." IEEE Signal Processing Magazine 25, no. 2 (2008): 14-20.
[6] Mendelson, Shahar, Alain Pajor, and Nicole Tomczak-Jaegermann. "Uniform uncertainty principle for Bernoulli and subgaussian ensembles." Constructive Approximation 28, no. 3 (2008): 277-289.
[7] Baraniuk, Richard G. "Compressive sensing." IEEE Signal Processing Magazine 24, no. 4 (2007).
[8] Elad, Michael. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, 2010.
[9] Gorodnitsky, Irina F., and Bhaskar D. Rao. "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm." IEEE Transactions on Signal Processing 45, no. 3 (1997): 600-616.
[10] Mohimani, G. Hosein, Massoud Babaie-Zadeh, and Christian Jutten. "Fast sparse representation based on smoothed ℓ0 norm." In Independent Component Analysis and Signal Separation, pp. 389-396. Springer Berlin Heidelberg, 2007.
[11] Wipf, David P., and Bhaskar D. Rao. "Sparse Bayesian learning for basis selection." IEEE Transactions on Signal Processing 52, no. 8 (2004): 2153-2164.
[12] Chen, Scott Shaobing, David L. Donoho, and Michael A. Saunders. "Atomic decomposition by basis pursuit." SIAM Journal on Scientific Computing 20, no. 1 (1998): 33-61.
[13] Needell, Deanna, and Roman Vershynin. "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit." Foundations of Computational Mathematics 9, no. 3 (2009): 317-334.
[14] Needell, Deanna, and Joel A. Tropp. "CoSaMP: iterative signal recovery from incomplete and inaccurate samples." Communications of the ACM 53, no. 12 (2010): 93-100.
[15] Elad, Michael, Boaz Matalon, Joseph Shtok, and Michael Zibulevsky. "A wide-angle view at iterated shrinkage algorithms." In Optical Engineering + Applications, pp. 670102. International Society for Optics and Photonics, 2007.
[16] Daubechies, Ingrid, Michel Defrise, and Christine De Mol. "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint." Communications on Pure and Applied Mathematics 57, no. 11 (2004): 1413-1457.
[17] Beck, Amir, and Marc Teboulle. "A fast iterative shrinkage-thresholding algorithm for linear inverse problems." SIAM Journal on Imaging Sciences 2, no. 1 (2009): 183-202.
[18] Adeyemi, Tony, and M. E. Davies. "Sparse representations of images using overcomplete complex wavelets." In Proc. IEEE SP 13th Workshop on Statistical Signal Processing, pp. 17-20. 2006.
[19] Combettes, Patrick L., and Valérie R. Wajs. "Signal recovery by proximal forward-backward splitting." Multiscale Modeling & Simulation 4, no. 4 (2005): 1168-1200.
[20] Candès, Emmanuel, and Justin Romberg. "l1-magic: Recovery of sparse signals via convex programming." www.acm.caltech.edu/l1magic/downloads/l1magic.pdf, 2005.



Many thanks for your participation and we hope to see you next year!

International Conference for Internet Technology and Secured Transactions (ICITST-2015): www.icitst.org
World Congress on Internet Security (WorldCIS-2015): www.worldcis.org
World Congress on Sustainable Technologies (WCST-2015): www.wcst.org

Have a great trip back home!