
IOTA 2013


Annual Technical Journal, brought to you by IEEE DTU Student Branch



Editors’ Note

Albert Einstein once said, “The eternal mystery of the world is its comprehensibility.” The connotation of this quotation depends heavily on the perspective of the reader. One person would associate it with the omnipresence of God; another would turn to the depths of philosophy. But as engineering graduates, we reckon on science and technology as the ultimate means to unravel the incomprehensible mysteries of the world. Being technocrats, our answers to the questions that intrigue us are intimately related to technology. With the tremendous progress science is making today, our ability to explain the happenings around us is improving day by day, and at this pace of development, the day is not far when no scientific conundrum will remain unsolved. On that day, man will stand at the summit of his virtues, charged with the great responsibility of passing on this legacy, along with a beautiful world, to the next generation.

In this age of development, it would not be wrong to say that being technically unaware is a cardinal sin. One has to stay alert to the recent discoveries, research and experimentation going on in every field of science and engineering, and to know about technology ranging from a simple Android application to the basics of the Large Hadron Collider. With this ambition, we present to you the annual technical journal of IEEE DTU, the IOTA.

As you flip through the pages of the journal, you will find research work by students from a variety of branches, ranging from electrical, electronics and computer engineering to civil and environmental engineering. With the sole objective of raising the scientific and technical temper on the campus of DTU, we, the members of the editorial team, put forward IOTA 2013, a journal that blends technology with the life of the lay man.

Team Publication
IEEE DTU Student Branch


CONTENTS

VICE CHANCELLOR’S MESSAGE iv
PRO-VICE CHANCELLOR’S MESSAGE v
BRANCH COUNSELLOR’S ADDRESS vi
CHAIRMAN’S ADDRESS vii
A TOUR OF TECHWEEK viii
ROLE OF RENEWABLE ENERGY IN INDIAN POWER SECTOR 1
REALIZATION OF CDTA BASED FREQUENCY AGILE FILTER 3
DNA CRYPTOGRAPHY USING FINITE FIELD ARITHMETIC 7
COLLISION AVOIDANCE FOR MARINE VEHICLES USING FUZZY LOGIC 9
SUSTAINABLE MODEL OF MUNICIPAL SOLID WASTE MANAGEMENT IN DELHI - ENERGY POTENTIAL AND CHALLENGES 12
OPTIMIZED SOLUTION TO NP-COMPLETE ‘MAXIMUM CLIQUE PROBLEM’ USING GENETIC ALGORITHMS 14
HIGH PERFORMANCE CONCRETE 17
POPULATION ESTIMATION AND TRACKING OF WHALES AND DOLPHINS - IT’S ALL ABOUT ACOUSTICS AND THEIR CLICKS!! 19
QUADROTOR: AN OVERVIEW 20
AN INTRODUCTION TO POWER DISTRIBUTION USING S.C.A.D.A. 22
ADVANCED MEDICAL IMAGE VISUALIZATION TOOLS FOR BIOMEDICAL ENGINEERS 24
A CASE FOR PHARMACOGENOMICS IN MANAGEMENT OF CARDIAC ARRHYTHMIAS 26
IEEE DTU STUDENT COUNCIL 2013-14 28


Vice Chancellor’s Address

It is with immense pleasure that I release yet another edition of IOTA, the annual technical journal of the IEEE DTU student branch.

With the rapid strides the present world is making on the path of progress, being technically smart, managerially active and socially alert is of utmost importance. Delhi Technological University, carrying forth the legacy of Delhi College of Engineering, which has a glorious history of more than 70 years, takes this as its motto and works tirelessly in this direction. The magnificence and grandeur of DTU has been honoured from time to time by various government and non-government organizations and leading dailies. Recently, in July 2013, DTU won the title of “Emerging Technical University of the Year” at the Annual Study World Excellence Awards organised by the Study World Education Confluence, 2013. The award was presented by Dr. Shashi Tharoor, honourable Minister of State for Human Resource Development. The Outlook MRDA survey 2013 placed DTU at the eighth position among the top engineering institutions of the country and ranked DTU first in campus placements. Students have brought glory to this institution by winning various prestigious foreign scholarships and awards. Extended and enhanced laboratory and theoretical work is being conducted in almost all arenas of engineering. The first solar car developed by DTU students was flagged off by the Honourable President of India in September 2012. The International Symposium on Standards in Engineering and Technology 2012 was organized by IEEE DTU in association with IEEE USA on the Delhi Technological University campus in the month of October; the symposium focussed on the development of standards in the fields of smart grid, nuclear power and computer technology. A national conference on biotechnology and biomedical engineering was organised by the Department of Biotechnology. Students brought fame to DTU in the Texas Instruments Electrical Analog Design Contest. And the list of such glories continues.

Sharing my thoughts in the annual technical journal of IEEE DTU is indeed a great pleasure. Holding such an informative magazine in my hands gives me the contentment that IEEE DTU has unfailingly carried forward its legacy and fulfilled its duty of allowing DTU students to fly free in the wide space of technical advancement. I wish the students of Delhi Technological University a bright and shining future.

Prof. P.B. Sharma
Ph.D (B’ham, U.K.), FIE, FAeroS, FWAPS
Vice Chancellor
Delhi Technological University


Pro-Vice Chancellor’s Address

I am indeed very happy that DTU is bestowed with the powerful presence of a world-renowned professional society like IEEE, which is a major inspiration to our younger minds. The vast body of knowledge which currently exists in every field of engineering, and the rapidly advancing frontiers of knowledge, require that professional engineers remain in regular touch with their counterparts through professional societies.

Most students will be able to grasp different notions of technology through the Techweek, the first among the plethora of activities organized by the IEEE DTU student branch. IEEE DTU regularly organizes several technical and non-technical workshops so that students increase their proficiency and creativity. The Special Interest Groups (SIGs) educate students in various fields of technology. The students put their newly learnt skills to use during TROIKA, the annual TechFest of IEEE DTU, an event that sees the members’ managerial and technical abilities sharpened with dexterity and deftness.

IOTA is the annual technical journal of the IEEE DTU student branch. It stimulates its readers to advance their technical knowledge, expertise and communication skills, and it contains research articles from teachers and students working in various fields. I would like to congratulate the complete team of IOTA for their great efforts in bringing together such a wonderful journal.

I wish the students of the IEEE DTU student branch the very best of luck in all their ventures.

Prof. Moin Uddin
Pro-Vice Chancellor
Delhi Technological University


Branch Counsellor’s Address

The IEEE DTU student branch has always been a frontrunner when it comes to imparting technical expertise along with managerial acumen. DTU has stood witness to the meteoric rise of the society from being just one among many to being the best in many regards. IEEE DTU has always helped students explore new arenas in their quest for knowledge, has encouraged new talent to come up, and has lived up to the expectations with which a young student steps into an international society.

IEEE DTU has always provided students with a platform to interact with the corporate world and create bonds that last a lifetime. It gives them a base to hone their skills, offering opportunities and international exposure in many arenas, and has created a family of students who are ever ready to help each other. Maintaining our standards, this year we aim to make our accolades grow exponentially. Therefore, to make students aware of the opportunities available and to impart technical knowledge to them, IEEE DTU has once again come up with its annual technical magazine, IOTA.

IOTA is a publication that helps create a base for ideas that will prosper into edifices of technology and further serve mankind. It draws out the many latent talents hidden in the crevices of the university, deep into their research or holed up in their laboratories; Team IOTA brings to light the work they have been doing and gives them a platform to showcase it. For students who are new to technology and machines, IOTA bridges the gap by offering an insight into this world. Going through the pages of IOTA, one is surprised and awed at the brain pool of the university and the talent housed on its campus.

Last but not the least, a great applause for the editorial team for all their efforts in collecting the articles and putting them together to present to you this brilliant piece of work.

Dr. S. Indu
Branch Counsellor, IEEE DTU Student Branch


Chairman’s Address

On behalf of IEEE DTU, I welcome all members, new and old, to a period which will see a lot of activities and provide everybody with a chance to explore their interests and excel on the professional front.

IEEE DTU maintains a presence all year round through its never-ending list of events, which attempt to provide students a platform where they can decide the direction of their career. Techweek 2013 will epitomize our efforts to bridge the gap between the curriculum and the dynamic world of technology. The aim of Techweek is to equip students with knowledge of some of the options available to them through a series of workshops, and most students will be able to grasp different notions of technology through it. The Special Interest Groups (SIGs) will educate students in various fields of technology, such as Embedded Systems and Robotics, MATLAB and Digital Image Processing, Programming, Web Development, Solidworks, Graphic Designing, 3-D Animation, Currentex and Finance, along with workshops by the Power and Energy Society (PES) and Women in Engineering (WIE). Repertoire, our techno-managerial fest scheduled for the month of October, will be a delight for all students who have a knack for quizzing and business. TROIKA, the pinnacle of our activities, saw great success last year with the addition of two new events; this year, we have elaborate plans to take it to heights never scaled before.

IOTA, our annual technical publication, comes in a bigger and better edition this time. It has the ability to inspire readers with its articles, which speak of the technical knowledge and research expertise of the teachers and students of our coveted university.

Here, I take the opportunity to thank Prof. P.B. Sharma, Prof. Moin Uddin and our Branch Counsellor Dr. S. Indu for their constant support and guidance, which motivated us in our endeavours to bring the mission and vision of IEEE DTU to fruition. Last but not the least, I would like to acknowledge the dedicated spirit and hard work of the entire IEEE DTU family, which made all these things possible.

Rishi Pandey
Chairman, IEEE DTU Student Branch


A TOUR OF TECHWEEK!!

Web Development

Decipher the technical jargon behind the code and software, and the secret behind what you see on a website. Watch and learn web development in this interactive session, and soon you will be able to develop your own personal site. Need we say more?

3-D Animation

A visual treat it is, and though it looks simple, that simplicity is deceptive. The 3-D animation workshop showcases the genius of the IEEE animation storehouse and provides an introduction for animation enthusiasts who plan to further their skills as animators. Make that squiggly line dance to your tunes and jumble up everything to create your very own animation movie!

Embedded Systems

A word every hardware enthusiast is familiar with. Embedded systems, snazzy as they sound, are a treat for all who wish to immerse themselves in the world of sensors and microchips.

Graphic Designing

Photoshop is a name every graphics enthusiast is familiar with. In this workshop, learn the nuances of building your very own personal graphic studio: tweak your photos, and make posters and designs to the envy of the best artists. With a simple click here and a click there, liken yourself to the likes of Picasso and Van Gogh. Learn the effects that make an image stand out and dazzle everyone with your skills. You can then boast of your graphic designing skills and carve your way into the huge world of design, landing up in web design, poster making, magazine design and lots more!

Programming

The programming workshop deals with the fundamentals of C/C++ and simple coding. This is a big thumbs up for all coding enthusiasts as well as for beginners seeking to take a plunge into the world of computer programming.

Solidworks

And for all the mechanical, civil and automotive guys out there, we have something in store for you too. Knowledge of Solidworks gives you an edge over others in your branch. In this workshop you are introduced to the Solidworks software: you can design your own machine, visualize all its views, and simulate it. So don’t miss the opportunity!

Robotics

The name itself is enough. After seeing all those Hollywood movies, the first thing that comes to your mind on joining an engineering college is to build a robot of your own. In this workshop we tell you, from scratch, how to go about doing this. So get set, ready, go!

MATLAB and Digital Image Processing

Have you ever wondered how exactly a robot sees? What happens behind the cameras, in those deep lenses of its eyes? Digital image processing is what it takes to unravel the mystery. This workshop, clubbed with MATLAB, explains everything you need to know about image processing. A must for all hardware geeks and wannabe geeks!

WIE

WIE is a student-affinity group of IEEE. It organizes workshops and fun events and spreads awareness about the pivotal role of women in engineering. Here’s your chance to be a part of WIE and lend a hand to the cause of the upliftment of women in society. Boys are also most welcome to be a part of WIE!!

PES

The Power and Energy Society, a part of IEEE DTU, focuses on sharing and promoting the latest progress made in the power sector. The workshop will commence with a presentation elucidating the objective of the society, its history and the plethora of events it organizes, with the aim of giving freshers an overview of the society. The presentation will be followed by a quiz called WATTACK, which attracts all those who love to intrigue their brains with electrical circuits. The massive participation in WATTACK 2012 raised the standard of the quiz beyond imagination.


Role of Renewable Energy in Indian Power Sector

Dr. M. RizwanDepartment of Electrical Engineering, Delhi Technological University

In the last six decades, India’s energy use has increased 16 times and its installed energy capacity 84 times. Even so, India faces a power deficit, and a huge amount of capacity must be added to the present power sector to meet requirements. As per Ministry of Power data, the installed power generation capacity of India as on 31st July 2013 was 226960 MW, with a peak power shortage of around 12.9% during 2011-12. In addition, the demand for electricity is increasing due to growing population, urbanization and the rising comfort level of the people. This indicates that India’s future energy requirements are going to be very high.

Transmission and distribution losses, along with power theft in rural and urban areas, are major concerns of the present power system. Keeping the aforesaid in view, an intelligent and reliable power system is required which prevents power theft and transmits power at maximum efficiency. In addition, distributed generation is likely to be the option for the future power system. In a distributed power system, power is generated and utilized locally instead of being sent to a regional or national grid. In this scheme the costs associated with transmission and an interconnected grid system may be reduced, and the consumer gets a reliable power supply. Although the cost of power generated from renewable energy resources is higher than that of present conventional sources, it should reduce in the near future.

The total demand for electricity in India is expected to cross 950,000 MW by 2030. Ideally, India has to plan for 215000 MW of power from renewable energy sources by 2030. The installed capacity of thermal power (as on 31st July 2013) was 153848 MW, which is 67.8% of total installed capacity.
The installed capacity of coal based thermal power is 132288.39 MW (58.28% of the total installed base), gas based thermal power is 20359.85 MW (9%), and oil based thermal power is 1199.75 MW (0.53%). In addition, 39623.4 MW of power comes from large hydro, and 4780 MW and 28708.95 MW from nuclear energy and renewable energy resources respectively. The graphical representation of the Indian power sector as on 31st July 2013 is presented in Fig. 1.

India is venturing very fast into renewable energy (RE) resources like wind and solar. Solar has great potential in India, with its average of 300 solar days per year. In addition, wind energy, hydro and biomass are also available in huge quantity. There is a need to exploit these resources

to meet the future energy requirement. As far as renewable energy is concerned, the current installed base is 28708.95 MW, which is around 12.65% of the total installed base. Power generation through wind is 19565 MW and small hydro contributes 3686.25 MW, while the power generated from biomass and solar photovoltaic technology is 3698.31 MW and 1759.44 MW respectively. The potential of the above mentioned resources is huge and could be sufficient to meet the future requirements of the country.
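As a quick consistency check, the capacity shares quoted in the paragraphs above can be reproduced from the absolute figures; this is a sketch, and the variable and function names are ours:

```python
# Installed-capacity figures (MW) quoted in the article, as on 31 July 2013.
total = 226960.0          # total installed capacity
thermal = 153848.0        # all thermal sources
renewable = 28708.95      # all renewable energy sources
wind = 19565.0
small_hydro = 3686.25
biomass = 3698.31
solar_pv = 1759.44

def share(part, whole=total):
    """Percentage share of `part` in `whole`."""
    return 100.0 * part / whole

print(f"Thermal share:   {share(thermal):.1f}%")    # matches the quoted 67.8%
print(f"Renewable share: {share(renewable):.2f}%")  # matches the quoted 12.65%

# The per-source renewable figures also add up to the renewable total.
print(f"{wind + small_hydro + biomass + solar_pv:.2f} MW")
```

Running the sketch confirms that the wind, small hydro, biomass and solar figures sum to the 28708.95 MW renewable total quoted above.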

Fig. 1 Installed Power Capacity (MW) of India

As per MNRE statistics, power generation through solar photovoltaic technology has increased 50 times during the last two years alone. The Government is planning to add 20000 MW of power through solar energy alone under the Jawaharlal Nehru National Solar Mission (JNNSM). The National Solar Mission is a major initiative of the Government of India and the state governments to promote ecologically sustainable growth while addressing India’s energy security challenge. It also constitutes a major contribution by India to the global effort to meet the challenges of climate change. India is a tropical country, where sunshine is available for long hours per day and in great intensity; solar energy therefore has great potential as a future energy source. It also has the advantage of permitting the decentralized distribution of energy, thereby empowering people at the grassroots level. The government is giving incentives for solar power generation in the form of subsidies for various solar applications, and has set a goal that solar should contribute 7% of India’s total power production by 2022.


With such high targets, solar is going to play a key role in shaping the future of India’s power sector. In addition, solar energy is one of the most promising renewable sources, more predictable than the others and less vulnerable to seasonal changes in weather. The graphical representation of the contribution of renewable energy resources in the Indian power sector as on 30th June 2013 is shown in Fig. 2.

Fig. 2. Contribution of Renewable Energy Resources in Indian Power Sector

India has been making continuous progress in conventional and renewable power generation. The growth trajectory of installed capacity since the year 2002, up to 31st July 2013, is given in Table 1.

Table 1. Contribution of Installed Power Generation Capacity

Source             1.4.2002   1.4.2007   31.07.2013
Thermal (MW)       74429      87015      153848
Hydro >25 MW (MW)  26269      34654      39623.4
Nuclear (MW)       2720       3900       4780
Renewable (MW)     1628       10258      28709

It is observed that the share of renewable energy capacity has increased more than 6 times, from 2% to 12.65%, during the last 10 years, and renewables contribute around 4.4% of the electricity generation mix. By 2022, the projected shares of renewable energy resources in installed capacity and in the electricity mix would be around 16% and 6.4% respectively. From this data it is clearly seen that the role of renewable energy resources in meeting future energy demand will be significant. The projected contribution of grid interactive renewable power along with conventional power is shown in Table 2.

Table 2. Projected Contribution of Grid Interactive Power (MW)

Source              31.3.2013   31.3.2017   31.3.2022
Conventional        198251.39   283000      383000
Wind                —           27300       38500
Small Hydro         —           5000        6600
Biomass             —           5100        7300
Solar Power         —           4000        20000
Renewable (total)   28708.95    —           —
Total               226960.34   324300      455100

The projected contribution of renewable energy and conventional power as on 31st March 2022 is also presented in Fig. 3.

Fig. 3. Renewable and Conventional Power in Future Power Systems

From this study it is concluded that the demand for power is increasing drastically and would be around 5 times the present installed capacity by 2030. In order to bridge the gap, alternative resources, including renewable energy resources, should be harnessed to the maximum with improved conversion efficiency. Further, the proposed power system should be intelligent and smart compared to the present system. Features of the future power system include intelligent load and energy resource management through intelligent controllers and advanced control techniques, accommodation of multiple energy resources ensuring maximum utilization of RES, and self-healing.


Realization of CDTA Based Frequency Agile Filter

Dr. Neeta Pandey, Mrs. Rajeshwari Pandey, Richa Choudhary, Aseem Sayal, Manan Tripathi

Abstract - This paper presents a frequency agile filter based on the current difference transconductance amplifier (CDTA). The agile filters used in this work provide high agility, tunability and quality factor, while being fully integrated configurations rather than discrete systems. The use of grounded capacitors and a resistor makes these structures suitable for integration. The functional verification is exhibited through extensive SPICE simulations using 0.25 µm TSMC CMOS technology model parameters. The performance evaluation is made in terms of power dissipation, signal to noise ratio (SNR) and output noise.

I. INTRODUCTION

There is an increasing demand for a one-fits-all “analog” front-end solution, owing to the rapid evolution of wireless services from simple voice/text to multimedia. These services use different standards and therefore necessitate the development of multi-standard transceivers. An integrated multi-standard transceiver reduces size, price, complexity and power consumption. Its architecture has parameters that can be modified to adapt to the specifications of each standard. Such transceivers can be designed either by the practical approach of connecting in parallel elements that handle the various standards, or by using reconfigurable elements. Reconfigurable filters are integral components of a multi-standard transceiver. The recently introduced frequency agile filter (FAF) may be used in transceivers and is characterized by its adjustment range, reconfigurability as against tunability, and agility. The literature survey shows that only a limited number of active FAF topologies are available, based on opamps, current-mode active blocks and CMOS. The main intention of this paper is to present a CDTA based frequency agile filter.

A. Basic Terminologies

The basic terminologies related to the agile filter are described below:

1.) Adjustment Range: The center frequency of the filters must be adjustable for congruous tuning of multi-standard transceivers. The range over which the center frequency (f0) can be adjusted is termed the adjustment range: if the center frequency f0 can be adjusted between two frequencies f0min and f0max, the adjustment range extends from f0min to f0max.

2.) Tunability: The tuning ratio of the filter is given by (1).

n = f0max / f0min   (1)

A tunable filter is one in which the center frequency f0 can be tuned over a short, specific range to nullify the effect of drift, i.e. the value of n is small.

3.) Reconfigurability: Reconfigurable filters are tunable filters in which the center frequency is varied over a wide frequency range, i.e. the tuning ratio n is large.

4.) Agility: A frequency agile filter (hopping filter) is a reconfigurable filter which can switch between two frequencies f1 and f2 very quickly.

B. Implementation Scheme of FAF

The implementation scheme of the frequency agile filter (FAF) is described in this section.

1.) Class Zero FAF: The implementation of the FAF is based on a classical second order filter having one input (IIN) and two outputs, a band pass output (IBP) and a low pass output (ILP), as depicted in Fig. 1 [3]. This second order filter is called the Class Zero FAF. The transfer functions of the class 0 FAF are given by (2)-(3).

TBP(s) = IBP / IIN = ps / (1 + as + bs^2)   (2)

TLP(s) = ILP / IIN = q / (1 + as + bs^2)   (3)

The center frequency and quality factor of the filter are represented by (4)-(5) respectively.

f0 = 1 / (2π√b)   (4)

Q = √b / a   (5)

Fig. 1. Class Zero FAF (input IIN; outputs IBP and ILP)
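The terminology above can be made concrete with a small numerical sketch of relations (1), (4) and (5); the coefficient values below are illustrative choices of ours, not values from the paper:

```python
import math

# Illustrative coefficients of the second order denominator 1 + a*s + b*s^2.
a = 1.0e-6   # seconds
b = 1.0e-12  # seconds^2

f0 = 1.0 / (2.0 * math.pi * math.sqrt(b))   # center frequency, Eq. (4)
Q = math.sqrt(b) / a                        # quality factor, Eq. (5)

# Tuning ratio over a hypothetical adjustment range, Eq. (1).
f0_min, f0_max = 0.5 * f0, 2.0 * f0
n = f0_max / f0_min

print(f"f0 = {f0:.0f} Hz, Q = {Q:.1f}, tuning ratio n = {n:.1f}")
```

With these coefficients the sketch gives a center frequency of about 159 kHz with Q = 1, and a tuning ratio n = 4 for the assumed adjustment range.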


2.) Class 1 FAF: In the class 1 filter, the low pass output of the class 0 FAF is amplified (with variable gain A) and fed back to the input. The basic block diagram of the class 1 FAF is shown in Fig. 2. The characteristic frequency of the class 1 FAF is given by (6), where f0 is the center frequency of the class 0 FAF and A is the gain of the amplifier. The Q-factor QA of the class 1 FAF is given by (7).

3.) Class n FAF: The method outlined for the class 1 FAF realization can be extended to a class n FAF implementation, shown in Fig. 3. This requires n amplifiers placed in n feedback paths, obtained in the same way as in the class 1 implementation. Notably, only adjustable-gain amplifiers with gain A are required along with the class 0 FAF. The characteristic parameters of the nth class FAF are given by (8)-(9).

II. DESIGN OF CDTA BASED FAF

A. CDTA

The CDTA [11]-[18] is an active and versatile circuit element which is free from parasitic input capacitances and can operate over a wide frequency range due to its current-mode operation. It consists of a unity-gain current source controlled by the difference of the input currents, and a multi-output transconductance amplifier providing electronic tunability through its transconductance gain gm. The CDTA symbol is shown in Fig. 4 and its terminal characteristics in matrix form are given by (10).

The CMOS implementation of the CDTA [16] is given in Fig. 5. It comprises a current differencing stage (Mc1-Mc17) [16] followed by a transconductance amplifier (Mc18-Mc26). The value of the transconductance gm is expressed by (11) and can be adjusted through the bias current IBias of the CDTA.

f0A = f0 (1 + Aq)   (6)

QA = Q (1 + Aq)   (7)

gm = √(2 µ Cox (W/L)19,21 IBias)   (11)
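The frequency hop described by (6)-(7) can be illustrated numerically; all values below are illustrative assumptions of ours, not simulation results from the paper:

```python
# Class 1 FAF: the low pass output is amplified by gain A and fed back,
# shifting the center frequency and Q per Eqs. (6)-(7).
f0 = 1.0e6   # class 0 center frequency (Hz), illustrative
Q = 5.0      # class 0 quality factor, illustrative
q = 1.0      # DC gain of the low pass path, illustrative
A = 3.0      # adjustable amplifier gain

f0A = f0 * (1.0 + A * q)   # Eq. (6): shifted center frequency
QA = Q * (1.0 + A * q)     # Eq. (7): shifted quality factor

print(f"f0A = {f0A / 1e6:.1f} MHz, QA = {QA:.1f}")
```

With these numbers the filter hops from 1 MHz to 4 MHz, and Q scales by the same factor (1 + Aq).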


A. Design of CDTA based class zero FAF

The single-CDTA based second order filter provides both low pass and band pass responses and can therefore be used as the class zero FAF. The circuit, shown in Fig. 6, employs a single CDTA block, two grounded capacitors and one resistor. The low pass and band pass transfer functions of the CDTA based class 0 frequency agile filter are given by (12)-(13) respectively.

The center frequency and quality factor of the class 0 FAF are given by (14)-(15).

B. Design of CDTA based Class 1 FAF

Fig. 7 shows the CDTA based current mode class 1 FAF implementation. The CDTA block in the feedback path is used as an amplifier with tunable gain A. The expression for the gain A of the CDTA based amplifier is given by (16).

A = gm2 R2   (16)

This gain is adjusted by varying IBias2 to obtain a central frequency f0A higher than f0. In the realization of the class 1 FAF, the bias current of the class 0 FAF is kept constant while the amplifier’s gain is varied by changing IBias2 to obtain different center frequencies. The CDTA based class 1 FAF employs two CDTA blocks, two grounded capacitors and two resistors.

The center frequency and quality factor of the CDTA based class 1 FAF are expressed by (17)-(18) respectively.
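A designer can also invert (6) to find the gain, and hence via (16) the resistor value, needed for a target hop; the numbers below are illustrative assumptions of ours, not design values from the paper:

```python
# Given a target hop f0 -> f0A, invert Eq. (6) for the required gain A,
# then size R2 from Eq. (16), A = gm2 * R2. All numbers are illustrative.
f0 = 1.0e6          # class 0 center frequency (Hz)
f0A_target = 4.0e6  # desired hopped center frequency (Hz)
q = 1.0             # low pass DC gain

A = (f0A_target / f0 - 1.0) / q   # required amplifier gain, from Eq. (6)
gm2 = 300e-6                      # feedback CDTA transconductance (S), set by IBias2
R2 = A / gm2                      # resistor realizing the gain, from Eq. (16)

print(f"A = {A:.1f}, R2 = {R2 / 1e3:.1f} kOhm")
```

For this assumed hop the sketch asks for a gain of 3, realized here as a 10 kOhm resistor at the assumed transconductance; in the actual circuit the tuning knob is IBias2, which sets gm2.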

III. SIMULATION RESULTS

The theoretical propositions were verified through SPICE simulations using TSMC 0.25 µm CMOS process model parameters. The CMOS schematic of Fig. 5 is used for the CDTA, and the dimensions of the MOS transistors used in the implementation of the CDTA based FAF structure are given in Table I. Supply voltages of VDD = -VSS = 1.8 V are used.

IV. PERFORMANCE CHARACTERISTICS

The performance of the proposed CDTA based FAF circuits is studied in terms of output noise, total power dissipation and signal to noise ratio (SNR), and is summarized


in Tables VI and VII respectively. The maximum output noise voltage is calculated from the noise analysis curves of the CDTA based class 0 and class 1 FAF (Fig. 10 and Fig. 13 respectively). The maximum value of the SNR is taken from Fig. 9 and Fig. 12. The power dissipation is obtained through simulations.

V. CONCLUSION

In this article, a detailed examination of current difference transconductance amplifier (CDTA) based frequency agile filters was performed. The agile filter topologies use grounded capacitors and are suitable from an integration point of view. The filter configurations in class 0 and class 1 are evaluated in terms of power dissipation, SNR and noise performance. An increase in power dissipation is observed with increasing bias current.


DNA Cryptography using Finite Field Arithmetic

Harsh Bhasin, Department of Computer Engineering, Delhi Technological University

Abstract - DNA cryptography, in spite of being one of the most electrifying forms of cryptography, suffers from a dearth of keys. In its simulation, a plain text is mapped to a particular codon and decrypted via the mapping table. In this work a novel method of DNA cryptography has been proposed and implemented which makes use of finite field arithmetic to make the cryptographic system comparable to the best cryptographic techniques while still retaining the blend of nature. The proposed work has been implemented and analyzed, and the results were encouraging. The work opens the door of DNA cryptography to refined finite field arithmetic and promises a new methodology which overcomes most of the shortcomings of the existing ones.

I. INTRODUCTION

Data security is one of the most essential ingredients of communication. Security is required when dealing with electronic transactions, government communications and data access. To protect data, many key generation algorithms have been proposed. Some of them are purely mathematical in nature, and the only criterion is to produce random numbers which satisfy all the standard tests; such algorithms include DES and AES. The bottom line of the above theory is that the number of keys generated is too large for brute force analysis. The point remains that these algorithms are mathematical in nature and can therefore be backtracked. The other type of algorithm uses a blend of nature to generate the key, which is practically impossible to backtrack. The proposed work combines one such type, DNA cryptography, with finite field arithmetic to produce keys of quality as good as mathematical algorithms, if not better, while using an imitation of a natural phenomenon which makes them practically impossible to backtrack. The proposed work is based on cryptography using DNA, explained in the later sections, and uses finite field arithmetic for the encoding of the DNA. Many papers have been studied, and some of them implemented, to gather an idea of how much better the proposed algorithm is; the literature review and the conclusion discuss this point. The algorithm, if accepted, cannot be a final word but a window of opportunity to incorporate finite field arithmetic into DNA cryptography to generate keys which take less time to produce but are as strong as the keys produced by any other mathematical algorithm.

II. LITERATURE REVIEW
In the work reviewed, each alphabet is mapped to a codon, giving 26 possibilities, whereas 64 keys could have been generated had each base been taken individually. Likewise, the two-digit number taken for each nucleotide could have been random, generating up to 100 keys instead of just the 4 mentioned in the paper. The total number of keys generated in that work was just 26*4, whereas minor changes would have increased the number of keys to 64*100. The other papers studied were better in the sense that the number of keys was much larger, making decryption more difficult. DNA cryptography is indeed being explored worldwide, but minor changes in the above work could make a large difference in the number of keys that can be generated. It must also be realized that the existing cryptographic techniques are far ahead of the work proposed in the earlier papers. When implemented, that work failed to satisfy the general tests for random numbers such as the frequency test and the gap test. It is therefore necessary to modify the work to make DNA cryptography acceptable. An earlier cryptographic technique, DES, produced 72 quadrillion keys and was still considered inapt and therefore replaced by AES; an algorithm generating only a few keys cannot face or replace DES or AES.

III. DNA
Deoxyribonucleic acid is responsible for genetic inheritance. It is a polymer whose monomer is the nucleotide. Each nucleotide consists of a sugar, a base and a phosphate group. There are 4 different types of nucleotides: A for Adenine, G for Guanine, C for Cytosine and T for Thymine. Two types of bases are found in DNA: the larger are called purines and the smaller pyrimidines. Purines are of 2 types, Adenine (A) and Guanine (G), and the pyrimidines are Cytosine (C) and Thymine (T). DNA also contains a deoxyribose sugar which forms the backbone of DNA, and the bases are attached to the sugar as nucleosides (in ribonucleosides the sugar is ribose). DNA is composed of long strands of nucleotides in a helical form. Each base will only bond with one specific base, and this pairing leads to the formation of the DNA molecule with its helix structure. The bases are arranged in triplets. DNA sequences occur as complementary base pairs in strands, and therefore one DNA sequence determines the order of its complementary sequence.

Page 18: IOTA 2013

iota8

IEEE DTU Student Branch

IV. BIOTECHNOLOGICAL METHODS
Such methods have been developed for DNA and RNA strands and use DNA as a medium of ultra-scale computation. These methods fall under a class called bio-molecular computation, a blend of which was also applied to the Data Encryption Standard, since finding a key is essentially a combinatorial search process. DNA can be used as a medium for ultra-compact information storage, as one gram of DNA contains about 10^21 DNA bases. Some scientists have also suggested a wet database of biological data. For this to be successful, the data needs to be encoded in DNA strands. DNA-based molecular cryptographic systems encode the plaintext message into DNA strands and are based on one-time pads, which are theoretically unbreakable. A closely related field is DNA steganography, where the original plain text is not encrypted but disguised. Cryptography based on DNA strands maps the text to the DNA strand in a random yet reversible way; this is why, in the first step, codons are selected randomly and a mapping table is created for the decryption part. In the physical technique an encoded DNA message is annealed in the code book, extended, and finally annealed onto a DNA chip, while decryption reverses the process.

V. FINITE FIELD ARITHMETIC
The finite field arithmetic used in this work mainly deals with addition, the most fundamental arithmetic operation in finite fields. Adder circuits are key to efficient finite field arithmetic. To obtain speed and area efficiency, adders using a redundant representation are preferable, for example the carry-save form, where an integer x is represented as the sum of a carry part and a save part: x = xc + xs. Full adders connected in a pipelined manner can perform addition where one operand is redundant and the other is non-redundant. A dual field adder (DFA) can perform addition both with and without carry, for GF(p) and GF(2^n) respectively, because its full adder is equipped for both kinds of addition. It has an input fsel such that if fsel = 1, addition with carry is performed in GF(p) mode; otherwise addition without carry (modulo-2 addition) is performed in GF(2^n) mode. The DFA does not increase the critical path delay (CPD). In 3:2 adder arrays one of the operands is always in non-redundant form, so they do not work if both operands are redundant; 4:2 adder arrays are used in that case, but they make subtraction difficult because a carry overflow may be hidden in the carry-save representation, which causes problems with the 2's complement form. The redundant signed digit (RSD) representation overcomes this difficulty: an integer is represented as the difference of two other integers, so there is no need for a 2's complement representation to handle negative numbers and subtraction. RSD is the more natural representation where both addition and subtraction must be supported.
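The fsel behavior described above can be captured in a small behavioral model (a software sketch, not the hardware circuit; the function name and interface are our own illustration):

```python
def dual_field_add(a, b, fsel, p=None):
    """Behavioral model of a dual field adder (DFA).

    fsel = 1: GF(p) mode, ordinary addition with carry (reduced mod p if given).
    fsel = 0: GF(2^n) mode, addition without carry, i.e. bitwise XOR.
    """
    if fsel:
        s = a + b
        return s % p if p is not None else s
    return a ^ b  # carry-free modulo-2 addition

# GF(2^n) mode: 1010 + 0110 = 1100, since no carries propagate between bits
assert dual_field_add(0b1010, 0b0110, fsel=0) == 0b1100
# GF(p) mode with p = 13: (10 + 6) mod 13 = 3
assert dual_field_add(10, 6, fsel=1, p=13) == 3
```

The XOR branch is exactly polynomial addition over GF(2^n): each bit position adds independently modulo 2.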

VI. PROPOSED WORK
The proposed system is named the DNA Cryptographic System (DCS). The flow of the work is as follows. The input, or plain text, is a string of alphabets (for example, a name), which is first encrypted using the DNA encrypter to produce cipher text 1 (CT1); the alphabet-to-codon assignments are stored in mapping table 1. Then CT1, which is a sequence of bases, undergoes a conversion in which the nucleotide bases A, C, G and T are assigned three-digit random numbers by the nucleotide-to-random-number conversion module, producing cipher text 2 (CT2); this mapping is stored in mapping table 2. CT2 becomes the input to the finite field encrypter, which in turn produces cipher text 3 (CT3), the final cipher text. This travels through the secure channel to the receiver's side. At the receiver's side, CT3 is decrypted first by the finite field decrypter, producing decrypted text 2 (DT2). DT2 then undergoes random-number-to-nucleotide-base conversion using mapping table 2, and the resulting DT1 is decrypted using mapping table 1, producing the final plain text (PT), which is the output.

The detailed steps are as follows:
Step 1: First, 64 codons are generated randomly from the nucleotide bases A, C, G, T. For an input string of alphabets (for example, a name), a codon is assigned at random to each alphabet and stored in mapping table 1; this gives cipher text 1.
Step 2: Randomly generated three-digit numbers are assigned to the four nucleotide bases A, C, G and T. CT1 generated through DNA encryption thus becomes a random number sequence, CT2, and these base-to-number pairs are stored in mapping table 2.
Step 3: CT2 undergoes finite field encryption, generating CT3, the final cipher text that the receiver's side gets.
Step 4: At the receiver's side, CT3 undergoes finite field decryption, producing DT2, which is then converted by substituting each random number with its corresponding nucleotide base through mapping table 2, yielding DT1.
Step 5: DT1 undergoes DNA decryption with the help of mapping table 1, producing the plain text.
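The first two stages of this pipeline (mapping tables 1 and 2) can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' C# implementation, and the finite field stage (CT2 to CT3) is omitted:

```python
import itertools
import random
import string

BASES = "ACGT"

def build_tables(rng):
    # Mapping table 1: each letter -> a distinct random codon (64 available, 26 used)
    codons = ["".join(t) for t in itertools.product(BASES, repeat=3)]
    rng.shuffle(codons)
    table1 = dict(zip(string.ascii_uppercase, codons))
    # Mapping table 2: each base -> a distinct random three-digit number
    numbers = rng.sample(range(100, 1000), 4)
    table2 = dict(zip(BASES, (str(n) for n in numbers)))
    return table1, table2

def encrypt(plaintext, table1, table2):
    ct1 = "".join(table1[ch] for ch in plaintext.upper())  # letters -> codons
    ct2 = "".join(table2[base] for base in ct1)            # bases -> 3-digit numbers
    return ct1, ct2

def decrypt(ct2, table1, table2):
    inv2 = {v: k for k, v in table2.items()}
    inv1 = {v: k for k, v in table1.items()}
    bases = "".join(inv2[ct2[i:i + 3]] for i in range(0, len(ct2), 3))
    return "".join(inv1[bases[i:i + 3]] for i in range(0, len(bases), 3))

rng = random.Random(42)       # fixed seed only so the example is reproducible
t1, t2 = build_tables(rng)
ct1, ct2 = encrypt("ALICE", t1, t2)
assert decrypt(ct2, t1, t2) == "ALICE"
```

Because both tables are random bijections, decryption is simply the inverse lookup in the reverse order of the encryption stages.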

VII. RESULTS AND CONCLUSIONS
The proposed work has been implemented and analyzed. The implementation was done in the .NET Framework 4.0 and the language used is C#. The work satisfies various tests of random numbers, including the gap test and the frequency test. The autocorrelation coefficient of the set of keys produced is good enough for the system to be considered a good cryptographic system. The system has the robustness of RSA together with the goodness of DNA cryptography. As per the analysis, the system performs better than the DNA-cryptography-based systems proposed earlier.


Collision Avoidance for Marine Vehicles using Fuzzy Logic

Rohit Kumar Singh (ECE) and Shashank Garg (EE), 3rd Year

I. INTRODUCTION

Technology for autonomous surface vessels has grown enormously in recent decades. The key technologies involved are navigation, propulsion and stabilization systems, secure wireless telemetry and data links, and anti-collision systems for protection from other surface vessels. Most of these technologies were pioneered by defense companies, and a few notable systems have emerged from their stables. Being developed by defense companies, most of the technology was closely guarded.

Stingray: An Unmanned Surface Vehicle

Only since the last decade has there been considerable growth in accessible technology in this field. Here, we propose a collision avoidance system for unmanned as well as large-tonnage manned vehicles using fuzzy logic. Special attention needs to be given to large-tonnage vehicles, as they have very large momentum and very slow response to steering changes; this can be accounted for in the navigation software. Various collision-avoidance methods for vessels have been put forward in the past, including control theories which require precise modeling of the ship as well as its surroundings, which are highly uncertain. A good choice for dealing with such uncertain systems is fuzzy logic.

Fuzzy logic is a practical, inexpensive solution for controlling complex, ill-defined systems. It provides an easy platform to analyze and interpret complex systems which are regularly subject to change. The procedure for obtaining a usable output from a set of inputs using fuzzy logic involves three steps: fuzzification, inference using a fuzzy inference engine, and defuzzification. Fuzzy logic offers a wide variety of choices for membership functions and for fuzzification and defuzzification methods, so a system can be designed to best suit the problem at hand. It is very important to choose an appropriate membership function in order to obtain correct outputs. The membership function chosen here works nicely and serves multiple purposes. The defuzzification method used here is the most common one, the centroid method.
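The three steps can be illustrated with a minimal sketch, assuming triangular membership functions and discrete centroid defuzzification (a generic illustration, not the exact functions used in the prototype):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(xs, mus):
    """Discrete centroid defuzzification: membership-weighted average of samples."""
    total = sum(mus)
    return sum(x * m for x, m in zip(xs, mus)) / total if total else 0.0

# Sample the output universe (steering angle in degrees) and defuzzify.
xs = list(range(-90, 91, 5))
mus = [tri(x, -30, 0, 30) for x in xs]       # a fuzzy set centered on "straight ahead"
assert abs(centroid(xs, mus)) < 1e-9         # symmetric set -> crisp output 0 degrees
```

A real controller would first clip each rule's output set by its firing strength and aggregate the sets before taking the centroid; the arithmetic of the final step is exactly the one shown.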

II. IMPLEMENTATION

Calculating the vehicle's heading direction using fuzzy logic requires the three things needed by any fuzzy logic system: membership functions for the inputs and outputs, fuzzy inference IF-THEN rules relating inputs to outputs, and fuzzification and defuzzification methods. In operation, the inputs sent to the fuzzy system are the angular position of the obstacle, the linear distance of the obstacle and the speed of the watercraft, and the outputs are the speed of the vehicle and its heading direction. For locating obstacles, the region around the ship can be divided into sectors of concentric semicircles. Any obstacle present in the region then has two important pieces of information associated with it: its distance from the vehicle and its angular position with respect to the vehicle.

The inputs, angular position (phi) and linear distance (d_obs), determine the position of the obstacle. The most basic membership functions for these inputs, given the information in hand, are of the form shown in the figure. As can be seen, the angular position (phi) has seven membership functions corresponding to seven divisions of the area around the vehicle. Similarly, the linear distance (d_obs) has three membership functions corresponding to three semicircular divisions of the area. It is clear from the membership functions that a single obstacle can be tracked easily, but in the case of multiple obstacles the system cannot locate them all, as one variable can take only one input. The inputs required to correctly evaluate the steering angle are the distance of the obstacle, the speed of the ship and the position of the obstacle, so accurate inputs must be supplied to obtain the correct steering angle. The speed of the ship can be determined directly, but the position of the obstacle needs some processing before being used as an input: obstacle positions are detected by available sensors such as onboard radar and fed to the fuzzy logic controller after processing.

Abstract: We propose a collision avoidance strategy for autonomous marine vehicles. It is an application of fuzzy logic, used to calculate an optimum speed and sailing direction for the vehicle. The system has been implemented in a small prototype vehicle for demonstration purposes, but is particularly suited to large vessels. Its main usage is envisioned in crowded harbors such as Mumbai Harbor and narrow shipping lanes such as the Straits of Malacca.
Keywords: marine vehicle collision avoidance, fuzzy logic

III. ANGULAR DISTRIBUTION

As shown, the area around the ship has been divided into sectors whose angular widths gradually increase as the threat posed by an obstacle decreases; the highest priority is given to obstacles right in front. The angle distributions for the input angular position of the obstacle, starting from the right, are as follows:
i) (0°, 45°)
ii) (45°, 70°)
iii) (70°, 85°)
iv) (85°, 95°)
v) (95°, 110°)
vi) (110°, 135°)
vii) (135°, 180°)
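The sector lookup implied by this list can be sketched as follows (the function name and 1-based indexing are our own choices):

```python
# Angular sectors from the article, measured in degrees from the starboard side.
SECTORS = [(0, 45), (45, 70), (70, 85), (85, 95), (95, 110), (110, 135), (135, 180)]

def sector_of(phi):
    """Return the 1-based index of the angular sector containing phi."""
    for i, (lo, hi) in enumerate(SECTORS, start=1):
        if lo <= phi < hi or (hi == 180 and phi == 180):
            return i
    raise ValueError("phi outside [0, 180]")

assert sector_of(90) == 4    # dead ahead falls in the narrow highest-priority sector
assert sector_of(20) == 1    # far to starboard, widest (lowest-priority) sector
```

Note how the sector widths shrink toward (85°, 95°): an obstacle dead ahead is resolved with the finest angular granularity.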

IV. MEMBERSHIP FUNCTION

The figure also shows the membership function of the front-most divided area, (-5°, 5°) or (85°, 95°). The membership function chosen was the most suitable for our application for several reasons:
i) The output angle was required to be a particular value rather than a quantity that varies with even a minimal change in the obstacle.
ii) The overall range of the membership function was kept (-90°, 90°), since obstacles behind the vehicle need not be considered.
This provides several benefits:
i) The input value for all the functions is the same: wherever the obstacle may lie, the angular position value is sent to all the inputs, and the inputs themselves determine whether to be operational or not.
ii) The weight of all the rules was kept at 1 so as to simplify the calculations.
iii) This worked nicely for multiple obstacles as well.


V. CONFUSION CASE AND ELIMINATION

This system, however, was found to encounter a problem in one particular case. When an obstacle is present in the center, the ship is free to move either right or left, which is not a problem for a single obstacle. When there are multiple obstacles, however, this creates a problem if one of them is right in front: we must decide whether the ship should move right or left around the center obstacle. To do this, another membership function for the center region was created which acts in exactly the opposite way to the original center-region membership function, i.e. if the original membership function commands the ship to move right, the new one commands it to move left, so that the operation of the logic can be switched accordingly. The input to be activated, however, has to be decided, which can easily be done using if-else conditions. The image shows the flow chart of the program.
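The if-else selection of which center-region rule set to activate could look like the following hypothetical sketch (the obstacle representation, sector boundaries and rule-set names are all our own assumptions, not taken from the authors' program):

```python
def pick_center_ruleset(obstacles):
    """obstacles: list of (phi_deg, dist) pairs, phi measured from starboard.

    If an obstacle lies dead ahead (85-95 degrees), activate the 'turn right'
    center rules when the port half is more crowded, else the mirrored
    'turn left' rules; otherwise keep the default rule set.
    """
    ahead = any(85 <= phi <= 95 for phi, _ in obstacles)
    if not ahead:
        return "default"
    port = sum(1 for phi, _ in obstacles if phi > 95)       # obstacles to the left
    starboard = sum(1 for phi, _ in obstacles if phi < 85)  # obstacles to the right
    return "turn_right" if port > starboard else "turn_left"

assert pick_center_ruleset([(90, 50), (120, 60)]) == "turn_right"
assert pick_center_ruleset([(90, 50), (30, 60)]) == "turn_left"
```

The point of the sketch is only the switching structure: the crisp pre-check decides which of the two mirrored membership functions the fuzzy controller then uses.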

VI. OUTCOME

Complete code can be downloaded from Github: http://bit.ly/13Fwsut


Sustainable Model of Municipal Solid Waste Management in Delhi - Energy Potential and Challenges

Abstract - With the increasing population, the quantity of Municipal Solid Waste (MSW) generated in Delhi, the capital city of India, is also increasing at an alarming rate. Presently, the inhabitants of Delhi generate about 7000 tons/day of MSW, which is projected to rise to 17,000-25,000 tons/day by the year 2021, and management of such huge quantities of waste is a challenge. Poorly managed waste has an enormous impact on health, the local and global environment, and the economy; improperly managed waste usually results in downstream costs higher than what it would have cost to manage the waste properly in the first place. This study critically reviews the current waste management practices in Delhi and then proposes a sustainable model for management of this waste, which can help the Urban Local Bodies (ULBs) responsible for MSW management in preparing more efficient plans. It was found that recycling, composting, biomethanation and incineration together can lead to sustainable MSW management. This document calculates the energy potential from the waste and also estimates the reduction in the amount of Greenhouse Gases (GHG) emitted. The lone operating incineration plant in Delhi is studied. Finally, the article concludes that the proposed model would result in a reduction in the GHG emissions and would substantially reduce the load on the landfills.

Prof. Lovleen Gupta, Shivangi Garg, Vipul Vaid, Department of Environmental Engineering, Delhi Technological University

1. INTRODUCTION - FEASIBILITY OF INCINERATION IN DELHI
Solid waste management is a worldwide problem, and it becomes more complicated by the day due to the rise in population, industrialization and changes in our lifestyle. Presently most of the waste generated is disposed of either in open dumps, in developing countries, or in landfills, in developed ones. Landfilling as well as open dumping requires a lot of land, which is scarce in an urban city like Delhi, and can also cause several environmental problems. Emissions from landfill sites are the third largest contributor to global warming in India. There is an urgent need to work towards a solid waste management system which is environmentally, economically and socially sustainable. Waste-to-energy generation can be an alternative for sustainable management of this waste and will help in tackling this huge quantity of waste. Currently, about 7838 tonnes of MSW is generated daily in Delhi, of which 87%, amounting to 6796 tonnes/day, is collected; of that, only 1927 tonnes/day is treated. For the management of the collected waste, there are currently three landfill sites in Delhi (Bhalswa, Ghazipur and Okhla), three composting plants (Bhalswa, Narela/Bawana and Okhla), one incineration plant (at Okhla, which also generates 16 MW of power from the waste), one RDF plant (at Narela/Bawana), and one Construction & Demolition waste dump (at Burari). [5] The population of Delhi is increasing at an alarming rate, and waste generation is estimated to touch 18,000 TPD by 2021. The state of Delhi's landfills is not encouraging, with all of them way beyond their lifespan; all three landfill sites have crossed the 30-metre mark. Hence incineration can act as a better alternative which can decrease the pressure on land. Moreover, switching to incineration will also reduce methane, a major greenhouse gas produced in landfills. This article explores the potential of applying incineration of MSW in Delhi: if successfully employed, it can result in energy generation from the waste, reduction in GHG emissions, and reduction in the load on landfills. This article attempts to calculate the energy potential of the total MSW generated in Delhi, the consequent GHG reduction as a result of energy generation from MSW, and the reduction in the load on landfills.

2. ASSESSMENT OF ENERGY RECOVERY POTENTIAL AND GHG REDUCTION
2.1 Energy recovery potential
An assessment of the potential of energy recovery from MSW in Delhi is made from the knowledge of its calorific value and the amount of incinerable matter. It is assumed that about 35% of the total waste produced is incinerable after reduction of the moisture content (the assumption is based on the experience of the incineration plant at Okhla).
Total MSW generated in Delhi = 7838 tonnes/day
Total waste quantity to be incinerated = 7838 x 35% tonnes/day = W tonnes/day
Net calorific value of waste = NCV kcal/kg = 1250 kcal/kg
Energy recovery potential = NCV x W x 1.16 kWh
Power generation potential = 1.16 x NCV x W / 24 kW
Assuming the conversion efficiency to be 30% (the efficiency at which the turbine and generator convert the fuel into electricity),
Net power generation potential = 1.16 x 0.30 x NCV x W / 24 kW = 0.0145 x NCV x W kW = 0.0145 x 1250 x 7838 x 0.35 kW = 49.72 MW, taken as ~49 MW

2.2 GHG reductions
Amount of GHG reduced = MBy + EGy x EFgrid
where:
MBy: the CO2 equivalent of the methane that would have been produced in the landfill if all the waste were sent there (t CO2)
EGy: the amount of electricity supplied to the grid that was produced from waste and would otherwise have been produced from the grid mix (fossil fuel, hydro, wind, etc.) (MWh)
EFgrid: emission factor of the grid (tCO2/MWh)

Estimation of MBy:
The amount of methane generated at the landfill site is calculated with a first order decay model: as an approximation, methane generation in the landfill is described as a function of time according to a first order decay process. The model differentiates between waste types j with respective decay constants kj and fractions of degradable organic carbon DOCj. The methane calculation is given by:

MB_y = φ · GWP_CH4 · F · (16/12) · DOC_f · MCF · Σ_{x=1}^{y} Σ_j W_{j,x} · DOC_j · e^{-k_j(y-x)} · (1 - e^{-k_j})    (1)

With time, the amount of methane generated from the landfill will increase, but to stay on the conservative side the methane emissions are calculated for just one year after the waste was put in the landfill. From the data given in Table 1, biodegradables form 73.7% of the waste, and from the literature the kj value of biodegradable matter is 0.05. Putting these values into equation (1) gives MBy = 257,653 tCO2/year.

Amount of CO2 saved by producing electricity from waste:
EFgrid,y = 0.9583 tCO2/MWh [12]
Assuming 330 days of plant operation,
Electricity produced = 49 x 330 x 24 MWh
Amount of CO2 saved = 49 x 330 x 24 x 0.958 tCO2 = 371,780 tCO2
Total GHG reduction = 257,653 + 371,780 = 629,433 tCO2/year

From the above calculations it is clear that by incinerating MSW it is possible to generate about 50 MW of electricity which would otherwise have been produced from fossil fuels, and that GHG emissions are reduced substantially compared with dumping all the waste in the landfill.
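The arithmetic of the energy and CO2 estimates above can be reproduced directly (figures taken from the article; variable names are ours):

```python
# Figures from the article's Delhi MSW incineration estimate
msw = 7838            # tonnes/day of MSW generated
frac_incin = 0.35     # incinerable fraction after moisture reduction
ncv = 1250            # net calorific value, kcal/kg
eta = 0.30            # turbine/generator conversion efficiency

W = msw * frac_incin                     # tonnes/day sent to the incinerator
power_kw = 1.16 * eta * ncv * W / 24     # net power generation potential, kW
power_mw = int(power_kw / 1000)          # ~49 MW, rounded down as in the article

ef_grid = 0.958       # grid emission factor used in the final step, tCO2/MWh
days = 330            # assumed days of plant operation per year
mwh = power_mw * days * 24               # electricity produced per year, MWh
co2_saved = mwh * ef_grid                # tCO2 of displaced grid generation

assert power_mw == 49
assert abs(co2_saved - 371780) < 1       # matches the article's 371,780 tCO2
```

The factor 1.16 converts kcal/kg x tonnes/day into kWh/day (1 kcal ≈ 1.16 Wh), and dividing by 24 hours yields continuous power.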

3. SUSTAINABLE MODEL FOR MSW MANAGEMENT IN DELHI
The model is designed keeping in mind the current segregation scenario in Delhi. Since source segregation of organics is extremely poor in Delhi, composting does not seem to be a good option. It is proposed that the recyclables (plastic, paper, metal, glass) should be removed from the waste and sent to the recycling industry, and that the rest of the waste, including biodegradable matter and inerts, should be sent to the incinerator. The advantage of sending the inerts for combustion is that it reduces the volume of waste to be disposed of afterwards. The bottom ash (a byproduct of incineration) thus produced should either be sent to the landfills, or its potential use in road construction may be explored.

4. CHALLENGES AND CONCLUSION
Incineration is well researched, but because of high capital costs and emissions it is not popular in India. The lack of source-segregated waste is a major challenge, and the cost of energy generation is on the higher side due to the costs incurred in pollution control mechanisms. There is a lack of public acceptance of such projects and a negative perception in the minds of people. ULBs seek royalties instead of providing a tipping fee; land is provided on lease for waste processing but not for power generation; and bidders for waste processing plants do not bid when WTE is an option, because the CAPEX for WTE is higher and revenue accrual is lower. From the data presented in this article we conclude that MSW-to-energy is a sustainable approach for tackling the future energy crisis: reduced GHG emissions and reduced load on landfills are advantages of this approach to MSW management. The lack of segregated waste, as well as of public acceptance of such projects, will remain a challenge, which can be overcome by sound policies and planning.


I. INTRODUCTION

As per computer science concepts, any algorithm which runs in polynomial time (a complexity of O(c*n^x), where c is a constant and x is the degree of the polynomial) is an efficient algorithm for solving problems. But there exists a certain set of problems that cannot be solved in polynomial time; Cook, Karp and others defined such classes of problems as NP-hard. In this article we deal with one such problem, the Maximum Clique Problem, which requires finding a complete sub-graph of maximum cardinality. We cannot apply brute force, as the computational complexity is too high. Because there are too many cases, and they can be ordered by a fitness value, genetic algorithms are applicable. Here we use a linear fitness function x, where x is the number of vertices if the chromosome gives a complete graph, and 0 otherwise. We have analyzed the algorithm on different graphs with satisfactory results; the algorithm performs better than randomized algorithms.

The article is organized as follows. It first explains NP-complete problems, followed by the problem statement. It then explains the concept of genetic algorithms along with the computer science version of biological concepts like crossover and mutation. There is then a section on the roulette wheel selection procedure. Next, the results of the various experiments performed are described, and at the end there is a conclusion of the research work done. In our study, we believe that natural biological concepts can be implemented very efficiently to solve problems in computer science.

II. NP COMPLETE

Algorithms can be divided into two sets:
1. P-type problems: problems whose solution exists and can be calculated in polynomial time, for example linear search and various sorting algorithms.
2. NP-type problems: problems whose solution can be verified in polynomial time. Since the solution of a P-type problem can also be verified in polynomial time, all P-type problems fall under the NP problems. These problems are further divided into two types:
• NP-complete: problems for which no polynomial-time algorithm has been discovered, but for which it has also not been proved that no such algorithm exists.
• NP-hard: problems at least as difficult as the NP-complete problems; they can be search, decision or optimization problems.
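The defining property of NP, that a proposed solution can be checked quickly even when finding one is hard, can be illustrated with the clique problem itself; the adjacency-set representation below is our own choice:

```python
def verify_clique(graph, candidate):
    """Polynomial-time verification: check that every pair of candidate
    vertices is adjacent. For k candidate vertices this is O(k^2) lookups."""
    return all(v in graph[u] for u in candidate for v in candidate if u != v)

# Graph as adjacency sets: vertices 0, 1, 2 form a triangle; vertex 3 is isolated.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
assert verify_clique(G, [0, 1, 2])     # a valid clique certificate is accepted
assert not verify_clique(G, [0, 3])    # vertex 3 has no edges, so this is rejected
```

Verification takes a handful of set lookups, while finding the largest clique may require examining exponentially many vertex subsets.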

III. PROBLEM STATEMENT

Given an undirected graph G = (V, E), where V is the set of vertices and E the set of edges, a clique is a set of vertices with an edge between every pair, i.e. for every pair (u, v) of vertices in the sub-graph, the edge (u, v) is present. In other words, a clique is a complete sub-graph of a graph, and the number of vertices in the clique is the measure of its size. The Maximum Clique Problem is the optimization problem of finding a clique with maximum cardinality, and it is NP-complete. This point can be illustrated as follows: for a graph of size n (greater than 40), the total number of sub-graphs is 2^n - 1, which is a huge number for such n. Even a super-computer would take many years to elucidate all the

Optimized solution to NP-Complete ‘Maximum Clique Problem’ using Genetic Algorithms

Gitanshu Behal and Shivani Choudhary, 3rd Year,
Department of Computer Engineering, Delhi Technological University

Abstract - The Maximum Clique Problem is an NP-Complete problem which has diversified applications in technical fields. Here, we propose a solution to this problem using Genetic Algorithms (GAs). The work suggests that natural biological concepts like inheritance and evolution can be implemented very efficiently to solve problems in computer science.


sub-graphs and find the maximum one. So we can conclude that the clique problem is indeed NP-complete.

IV. GENETIC ALGORITHM

In the computer science field of artificial intelligence, a genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection and crossover. Genetic algorithms find application in bioinformatics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics and other fields. In a genetic algorithm, a population of candidate solutions is evolved toward better solutions for an optimization problem. Each candidate solution has a set of attributes (its chromosome or genotype) which can be mutated and altered. Historically, solutions are represented in binary as strings of 0s and 1s; however, different encodings are also attainable.

The evolution usually starts from a population of randomly generated individuals and is an iterative process, the population in each iteration being called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation.

The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires:
1. A genetic representation (a binary string) of the solution domain,
2. A fitness function to evaluate the solution domain.

A standard representation of each candidate solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.

A Simple Genetic Algorithm: Given a clearly defined problem to be solved and a bit-string representation for candidate solutions, a simple GA works as follows:
1. Start with a randomly generated population of n l-bit chromosomes (candidate solutions to a problem).
2. Calculate the fitness f(x) of each chromosome x in the population.
3. Repeat the following steps until n offspring have been created:
a. Select a pair of parent chromosomes from the current population, the probability of selection being an increasing function of fitness. Selection is done "with replacement," meaning that the same chromosome can be selected more than once to become a parent.
b. With probability pc (the "crossover probability" or "crossover rate"), cross over the pair at a randomly chosen point (chosen with uniform probability) to form two offspring. If no crossover takes place, form two offspring that are exact copies of their respective parents. (Note that here the crossover rate is defined to be the probability that two parents will cross over at a single point. There are also "multi-point crossover" versions of the GA in which the crossover rate for a pair of parents is the number of points at which a crossover takes place.)
c. Mutate the two offspring at each locus with probability pm (the mutation probability or mutation rate), and place the resulting chromosomes in the new population. If n is odd, one new population member can be discarded at random.
4. Replace the current population with the new population.
5. Go to step 2.
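The loop above can be sketched in Python. The OneMax fitness function (counting 1-bits) and all parameter values below are illustrative assumptions, not part of the original work:

```python
import random

random.seed(1)

L, N = 20, 30        # chromosome length and population size (illustrative)
PC, PM = 0.7, 0.01   # crossover and mutation probabilities (illustrative)

def fitness(chrom):
    """Toy OneMax fitness: the number of 1-bits in the chromosome."""
    return sum(chrom)

def select(pop):
    """Fitness-proportionate selection, with replacement (step 3a)."""
    return random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=1)[0]

# Step 1: random initial population.
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

for generation in range(100):
    next_pop = []
    while len(next_pop) < N:               # step 3: create n offspring
        p1, p2 = select(pop), select(pop)
        if random.random() < PC:           # step 3b: single-point crossover
            cut = random.randrange(1, L)
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        else:
            c1, c2 = p1[:], p2[:]
        for child in (c1, c2):             # step 3c: per-locus mutation
            for i in range(L):
                if random.random() < PM:
                    child[i] ^= 1
        next_pop += [c1, c2]
    pop = next_pop[:N]                     # step 4: replace the population

best = max(pop, key=fitness)
print(fitness(best))
```

Run repeatedly, the best fitness climbs toward the all-ones optimum; the same skeleton applies to the clique problem once a suitable fitness function is substituted.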

V. CROSSOVER

The attributes of chromosomes are intermingled using the process of crossover. The number of crossover operations is calculated by the formula:

No. of crossovers = (No. of cells in a chromosome × No. of chromosomes × crossover rate) / 100

where the crossover rate is kept between 2 and 5%. Two chromosomes are then chosen at random and the cut points are generated. The following is the way of generating a 2-point crossover.
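A minimal sketch of the 2-point crossover step in Python (the list-of-bits representation and the cut-point choice are illustrative):

```python
import random

random.seed(42)

def two_point_crossover(parent_a, parent_b):
    """Swap the segment between two randomly chosen cut points."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(1, n), 2))   # two distinct cut points
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

a, b = [0] * 8, [1] * 8
c, d = two_point_crossover(a, b)
print(c, d)   # the two children are bitwise complements of each other here
```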


VI. MUTATION

To escape local maxima, mutation is performed on randomly selected chromosomes. It is a genetic operator that maintains genetic diversity from one generation of the population to the next. The mutation operator randomly flips bits in chromosomes. The number of mutations is calculated as:

No. of mutations = (No. of cells in a chromosome × No. of chromosomes × mutation rate) / 100

where the mutation rate is kept around 0.5%.
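The bit-flip mutation can be sketched as follows (the 0.5% rate follows the text; the all-zero chromosome is illustrative):

```python
import random

random.seed(7)

def mutate(chromosome, rate=0.005):
    """Flip each bit of the chromosome with probability `rate` (0.5%)."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

chrom = [0] * 1000
mutated = mutate(chrom)
print(sum(mutated))   # roughly 5 bits flipped on average
```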

VII. ROULETTE WHEEL SELECTION

This selection method increases the chances of the fittest chromosomes being selected and replicated. The fitness of a chromosome is defined by a fitness function, chosen according to the problem; generally a sigmoid function is used. In the selection procedure, each chromosome is assigned the cumulative fitness value up to that chromosome. A random number is then chosen, with value not more than the total fitness sum of all the chromosomes, and the chromosome whose cumulative value is just greater than that random number is chosen and replicated in the population. The number of replications is:

No. of replications = (No. of cells in a chromosome × No. of chromosomes × selection rate) / 100

where the selection rate is kept around 0.5%. After the final iteration, one of the fittest chromosomes becomes the final solution; there is a very high probability of this chromosome being the best solution.
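The cumulative-fitness procedure described above can be sketched as follows (the population and fitness values are illustrative):

```python
import random

random.seed(3)

def roulette_select(population, fitnesses):
    """Pick one chromosome with probability proportional to its fitness."""
    total = sum(fitnesses)
    r = random.uniform(0, total)           # random value up to the fitness sum
    cumulative = 0.0
    for chrom, fit in zip(population, fitnesses):
        cumulative += fit
        if cumulative >= r:                # first cumulative value reaching r
            return chrom
    return population[-1]                  # guard against rounding error

pop = ["weak", "average", "fit"]
fits = [1.0, 3.0, 6.0]
counts = {c: 0 for c in pop}
for _ in range(10_000):
    counts[roulette_select(pop, fits)] += 1
print(counts)   # "fit" should be chosen roughly 60% of the time
```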

VIII. RESULTS

We performed experiments to check whether the genetic algorithm gives optimized results for the clique problem. Since the clique problem is NP-complete, we cannot verify our results directly. For this reason we created graphs in which the maximum clique was already known, i.e. we intentionally embedded the largest clique. For example, if a graph with 30 vertices has 434 edges (one fewer than the 435 edges of the complete graph), the largest clique will be of size 29; similarly we can create different test graphs. The algorithm was tested on 40 graphs with sizes ranging between 5 and 50. In 34 out of 40 runs, the genetic algorithm gave optimized results. Some of the results are shown in the table below.

From the above table, we can conclude that as the graph size increases, the probability of getting an optimized result decreases.
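The edge-count argument behind the test graphs can be checked mechanically: a complete graph on n vertices has n(n-1)/2 edges, so a 30-vertex graph with 434 edges is K30 minus one edge, and the 29 vertices avoiding the removed edge form the largest clique. A small stdlib-only verification sketch:

```python
from itertools import combinations

n = 30
edges = set(combinations(range(n), 2))   # K30 has 30*29/2 = 435 edges
edges.discard((0, 1))                    # remove one edge -> 434 remain

def is_clique(vertices):
    """True if every pair of the given vertices is joined by an edge."""
    return all(pair in edges for pair in combinations(sorted(vertices), 2))

print(len(edges))                 # 434
print(is_clique(range(1, 30)))    # True: the 29 vertices avoiding vertex 0
print(is_clique(range(30)))       # False: the pair (0, 1) is missing
```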


I. INTRODUCTION

The current trend of using superlatives in concrete technology may strike many as somewhat disconcerting. We have had high-strength concrete, hyperplasticisers and superplasticisers, very reactive pozzolana, and now high-performance concrete. High-performance concrete is not a new material of construction; it is difficult to imagine any concrete being manufactured and used that is not intended to perform. The only difference is the level of performance, which is higher than ordinary. High Performance Lightweight Concrete (HPLC) has been extensively investigated for, among other applications, use in oil drilling platforms in severe environments. The relationships, allowable stresses and the stress block given in structural codes for normal-strength concrete (e.g. IS 456 or IRC 21) will require modification. Acceptance testing on site has to be more than cube testing at 28 days. Where durability of concrete is the driving force for adoption of high-performance concrete, in-situ permeability tests are performed as a matter of routine.

II. DEFINITIONS

Many attempts have been made to define high-performance concrete. One quantitative definition is that:

• It should have a maximum water-cement ratio of 0.35, a minimum durability factor of 80% in the freeze-thaw resistance test as per ASTM C666, and a minimum compressive strength of 21 MPa at 4 hours, 34 MPa at 1 day, or 69 MPa at 28 days.

However, such quantitative definitions may not be satisfactory in all situations. Among general qualitative definitions is:

• High performance concrete is defined as concrete which meets special performance and uniformity requirements that cannot always be achieved routinely by using only conventional materials and normal mixing, placing, and curing practices. The requirements may involve enhancements of characteristics such as placement and compaction without segregation, long-term mechanical properties, early-age strength, toughness, volume stability, or service life in severe environments.

For high strength, the water-cement ratio should be low. The strength versus water-cement ratio rule holds good for concrete strengths of about 100 MPa or more. A low water-cement ratio is also required for low permeability of concrete, which is vital for high durability.

III. MECHANISM OF HIGH-PERFORMANCE

High strength and low permeability are logical developments of the presence of silica fume and superplasticisers in concrete. The dual requirements of high strength and low permeability are linked to each other through the need to reduce the volume of relatively larger capillary pores. As pointed out earlier, this is achieved by a low water-cement ratio as well as dense packing of fine particles. The role of superplasticisers, long-chain organic compounds, is to be adsorbed onto the cement grains and impart a negative charge to them, so that they repel each other, become deflocculated and disperse. The resulting improvement in workability can be used either to obtain flowing concrete for the same water and cement contents as in the control mix, or to reduce the water content by 20% or more, which results in high strength because of the low water-cement ratio.

The role of silica fume is manifold. Because of its enormous surface area as well as its relatively large content of glassy silica, it is a very reactive pozzolana. When the contribution of silica fume to the compressive strength of concrete is compared on the basis of the water-cement ratio of the mix, mixes containing silica fume show higher compressive strength at 28 days (fig.). The effect of silica fume is greater than that of the cement it replaces; the 'efficiency factor' is about three. In other words, 20 kg of silica fume can replace about 60 kg of cement and yield the same strength. It also helps in augmenting the early strength of concrete. Since the heats of hydration of both are of the same order, incorporation of silica fume enables the heat rise in concrete to be lowered - a critical

High Performance Concrete
Abhinav Daalia and Abhishek Anand, 3rd Year,
Department of Civil Engineering, Delhi Technological University


advantage for mass concrete. However, incorporation of silica fume in concrete increases the water demand. Hence, superplasticisers are required.

Dense packing is another basis of the superior performance of concrete containing silica fume and superplasticiser. The cement grains, which tend to flock together, are dispersed by the superplasticiser. The extremely fine silica fume particles are then packed in the space between the dispersed cement grains and the normally packed fine and coarse aggregate. The mechanism is shown schematically in the figure. The overall result is a denser microstructure. The concrete exhibits less porosity, with no evidence of capillary pores; only very narrow gel pores, less than 0.5 µm, are visible under high magnification in an electron microscope. C-S-H gel particles in concrete containing silica fume appear not as individual particles, but rather as a massive, dense structure. By residing in the pores of the hydrated cement paste, silica fume particles, on hydration, block the pores. Such pore-refining action reduces the size of the pores, although the overall porosity may remain the same.

Fig. Packing of cement paste containing superplasticiser and silica fume

Another important mechanism is the improvement of the transition zone around aggregate particles. In normal-strength concrete with only cement, the transition zone around the aggregate is 20 µm to 100 µm wide and richer in calcium hydroxide and ettringite, as against the C-S-H phase in the bulk matrix. The porosity is also higher. Thus, the transition zone forms a weak link. In the presence of silica fume, dense C-S-H occupies all the space around the aggregate and a direct bond with the aggregate is established. The result of the strengthened transition zone is a reduction in microcracking at the interface between cement paste and aggregate. The stress-strain curve remains linear up to about 85% of the failure stress or higher.

IV. CONCLUSIONS

Because of the altered microstructure and improved transition zone, engineering properties such as tensile strength, modulus of rupture and elastic modulus as functions of compressive strength, as well as the limiting ultimate strain, are different in the case of high-performance concrete. The relationships, allowable stresses and the stress block given in structural codes for normal-strength concrete (e.g. IS 456 or IRC 21) will require modification. Acceptance testing on site has to be more than cube testing at 28 days. Where durability of concrete is the driving force for adoption of high-performance concrete, in-situ permeability tests are performed as a matter of routine. The water permeability test (DIN 1048), the initial surface absorption test (BS 1881) and the rapid chloride permeability test (ASTM C1202 or AASHTO T 277) are suitable. A change in mindset is required to accept pozzolanic additives to concrete, or blended cements made with them.


Population Estimation and Tracking of Whales and Dolphins: It's All About the Acoustics and Their Clicks!
Prateek Murgai (3rd year), Electrical and Electronics Engineering

Marine mammals interact with their species and the undersea world by emitting and detecting "bio-acoustic" sounds. Communication by toothed whales, including sperm whales and all dolphins, is characterized by regular sonar "click" sounds focused over a narrow, beam-like bandwidth. Marine scientists use arrays of hydrophones (underwater microphones) to monitor these unique underwater acoustic waves in order to study the behaviour of these mammals. This technology is critical for tracking sperm whales and endangered river dolphins such as the Ganges river dolphins and the Irrawaddy dolphins in Chilika, India.

However, there are daunting challenges in successfully tracking and monitoring the behaviour of whales and dolphins. First, marine mammals produce species-specific bio-sonar click pulses that must be detected by underwater hydrophones and distinguished from background noise generated by fluctuating hydrodynamic conditions. Furthermore, the combination of the unpredictable three-dimensional mobility of these animals and the narrow beamwidth of the clicks makes it difficult to acoustically track either solitary animals or groups.

Notably, unwieldy sensor arrays with long baselines (large array size) of hundreds of metres are required to track sperm whales, because these animals routinely dive hundreds of metres below the surface. Thus there is demand for the development of innovative acoustic tracking systems consisting of short-baseline arrays (small array size) of hydrophones, with robust algorithms to discriminate between individual animals.

For engineers like us, such a task can be broken down into two parts: the bearing (direction) estimation and then the position estimation of these dolphins and whales.
There are many techniques to determine the position of the origin of the clicks based on the energy densities or intensities of the signals being received, but the most common approach for passive localization of a sound source is to use the time delays between pairs of sensors, also called Time Differences of Arrival (TDOA), to define curves of constant path difference. These are hyperbolas of revolution, also called hyperboloids, and the intersection of these hyperboloids yields the position of the whale or sound source. As a single pair of sensors yields a single TDOA, and hence a single hyperboloid, a closed solution, i.e. a definite position of the mammal in three-dimensional space, requires the intersection of at least three hyperboloids. That means a set of at least three TDOA values, i.e. an array of at least four sensors, in which all sensors cannot lie in the same plane.

The TDOA between any pair of sensors is estimated by picking up the two signals at the sensors and then finding the time delay between them, i.e. the point of maximum similarity between the signals. Techniques such as basic cross-correlation in the time domain can be used to estimate the time delays, but for better estimates we transform the signals into the frequency domain and apply weighting functions or phase transforms. These methods prove better because they reject the regions of least similarity and enhance the regions of maximum similarity, thus removing unwanted signal correlation.

In developing a real-time passive sonar localization system, it is quite tedious to solve numerically and computationally for the intersection of these hyperboloids, and the error introduced by small changes in eccentricity is also quite large. We therefore modify the position-estimation (multilateration) equations so that we solve for intersecting spheres rather than intersecting hyperboloids, which makes the task easier and more efficient. Since these mammals move at speeds much less than the speed of sound (1500 m/s in water), there is normally no need to introduce Doppler-effect compensation.
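A toy sketch of TDOA estimation via the cross-correlation peak (pure Python; the sample rate, pulse shape and delay are illustrative assumptions, and a real system would use FFT-based correlation with phase-transform weighting):

```python
import math

def cross_correlate(a, b):
    """Full cross-correlation of two equal-length signals (pure Python)."""
    n = len(a)
    return [sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
            for lag in range(-(n - 1), n)]

def estimate_tdoa(sig_a, sig_b, fs):
    """TDOA of sig_b relative to sig_a, from the cross-correlation peak."""
    corr = cross_correlate(sig_a, sig_b)
    best = max(range(len(corr)), key=corr.__getitem__)
    lag = best - (len(sig_a) - 1)      # samples by which sig_b lags sig_a
    return lag / fs

fs = 10_000                            # sample rate in Hz (illustrative)
click = [math.exp(-0.5 * ((i / fs - 0.01) / 0.001) ** 2) for i in range(400)]
d = 25                                 # true delay: 25 samples = 2.5 ms
sig_a = click
sig_b = [0.0] * d + click[:len(click) - d]

tdoa = estimate_tdoa(sig_a, sig_b, fs)
print(tdoa)   # 0.0025
```

One such TDOA per sensor pair feeds the multilateration step described above.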

In conclusion, the system devised above proves to be helpful, as it is impossible to visually observe and collect data about the underwater behaviour of these marine mammals - information that is required to understand and mitigate threats to their survival. These marine mammals are at the top of the food chain in their habitats and are an indicator of the health of the ecosystem; thus information regarding feeding, group behaviour and migration routes is indispensable for devising a strategy for their conservation. This research, along with the acoustic technology and signal processing methods, enables precise monitoring of the underwater movements of these mammals and provides critical tools for marine scientists and conservationists.


QUADROTOR: An Overview
SWATI GUPTA, 3rd Year, BT, DTU

VAIBHAV SAINI, 3rd Year, ME, DTU

I. INTRODUCTION

A quadrotor is a well-modelled aerial vehicle with four rotors in a cross configuration. Due to its ease of maintenance, high manoeuvrability, and vertical take-off and landing (VTOL) capability, it is being used increasingly nowadays.

Advantages                   Drawbacks
Simple mechanics             Large size and mass
High payload                 High energy consumption
Reduced gyroscopic effects
Capable of hovering

This cross structure is quite thin and light, yet it shows robustness by mechanically linking the motors. Each propeller is connected to its motor through reduction gears. All the propellers' axes of rotation are fixed and parallel. Furthermore, they have fixed-pitch blades, and their airflow points downwards (to obtain an upward lift).

The front and rear propellers rotate counter-clockwise, while the left and right ones turn clockwise. This configuration of opposite rotation directions is successful because it removes the need for a tail rotor, which fundamentally destabilizes a conventional helicopter, since either the speed or the pitch of the tail rotor has to be constantly changed to balance out the rotational torque.

II. STRUCTURAL DESIGN

A quadrotor is an aerial vehicle that generates lift with four rotors. Control of the quadcopter's motion is achieved by altering the pitch and rotation rate (rpm) of one or more rotor discs, thereby changing its torque load and thrust/lift characteristics.

It is an inherently unstable system. There are six degrees of freedom (three translational and three rotational parameters), controlled by only four actuating signals. The x- and y-axis translational motions are coupled with roll and pitch. Even though the quadrotor has 6 DOF, it is equipped with just four propellers, so it is not possible to reach a desired set-point for all the DOF, but for at most four. However, it is quite easy to choose the four best controllable variables and to decouple them to make the control easier. The layout of a quadrotor is shown in the figure.

There are two arms, each having motors at its ends. Motors 1 and 3, mounted on the same arm, rotate in the clockwise direction, while motors 2 and 4, mounted on the second arm, rotate in the anti-clockwise direction. Both motors at opposite ends of the same arm should rotate in the same direction to prevent torque imbalance during linear flight.

The vehicle uses a ‘brain’ so that the rotors can communicate with each other. This brain comprises its sensors: the inertial measurement unit (IMU), which packs the 3-axis accelerometer and the 3-axis gyro onto a single board, enabling the vehicle to correct itself mid-flight.

The four quadrotor targets are thus related to the four basic movements which allow the helicopter to reach a certain height and attitude.

The quad copter follows these basic movements:-

Altitude Motion: The throttle movement is provided by increasing (or decreasing) the speed of all the rotors by the same amount. It leads to a vertical force with respect to the body-fixed frame which raises or lowers the quadrotor.

Roll Motion: The roll movement is provided by increasing (or decreasing) the left rotor's speed and at the same time decreasing (or increasing) the right rotor's speed. It leads to a torque with respect to the central axis which makes the quadrotor roll. The overall vertical thrust is the same as in hovering.


Pitch Motion: The pitch movement is provided by increasing (or decreasing) the front rotor's speed and at the same time decreasing (or increasing) the back rotor's speed. It leads to a torque with respect to the central axis. The overall vertical thrust is the same as in hovering.

Yaw Motion: The yaw movement is provided by increasing (or decreasing) the speed of the front-rear rotor pair and at the same time decreasing (or increasing) that of the left-right pair. It leads to a torque which makes the quadrotor turn in the horizontal plane. The overall vertical thrust is the same as in hovering.
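The four movements map onto the four rotor speeds through a simple mixing rule. A sketch for the layout described above (the sign conventions are an assumption and depend on the chosen body axes):

```python
def motor_mix(throttle, roll, pitch, yaw):
    """Map the four control inputs to four rotor speeds.

    Sketch for the front/rear counter-clockwise, left/right clockwise
    layout described in the text; the signs are illustrative and depend
    on the chosen axis conventions.
    """
    front = throttle + pitch - yaw   # front-rear pair carries pitch and yaw
    rear  = throttle - pitch - yaw
    left  = throttle + roll + yaw    # left-right pair carries roll and yaw
    right = throttle - roll + yaw
    return front, rear, left, right

print(motor_mix(50, 0, 0, 0))   # hover: all four rotors at the same speed
print(motor_mix(50, 5, 0, 0))   # roll: left speeds up, right slows down
```

Note that each movement leaves the total thrust (the sum of the four outputs) unchanged, matching the "overall vertical thrust is the same as in hovering" remark above.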

The sensors
Accelerometer: An accelerometer measures acceleration. A 3-axis accelerometer will tell you the orientation of a stationary platform relative to the earth's surface.
Gyro: A gyro measures the rate of rotation around a particular axis. If a gyro is used to measure the rate of rotation around the aircraft roll axis, it will measure a non-zero value as long as the aircraft is rolling, but measure zero if the roll stops.

III. CONTROL SYSTEM

In robotics, the PID technique represents the basics of control. Even though many different algorithms provide better performance than PID, this structure is often chosen for its simplicity, its good performance for several processes, its fast response, and the fact that it can be tuned even without a specific model of the controlled system.

The first contribution (P) is proportional to the error and defines the proportional bandwidth: inside this interval the output is proportional to the error, while outside it the output is at its minimum or maximum. The second contribution (I) varies with the integral of the error. Although this component increases the overshoot and the settling time, it eliminates the steady-state error. The third contribution (D) varies with the derivative of the error. This component helps to decrease the overshoot and the settling time.
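A minimal discrete PID loop illustrating the three contributions (the gains and the toy plant below are illustrative assumptions):

```python
class PID:
    """Minimal PID controller; gains here are illustrative."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt                    # (I) removes steady-state error
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt        # (D) damps the response
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: a plant whose angle rate equals the command.
pid = PID(kp=2.0, ki=0.5, kd=0.1)
angle, dt = 0.0, 0.01
for _ in range(1000):
    command = pid.update(10.0, angle, dt)   # drive the angle toward 10 degrees
    angle += command * dt
print(round(angle, 2))
```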

Further, the sensors used are quite noisy. Filters need to be used to reduce the noise and smooth the readings for accurate error calculation in the PID control system.

Accelerometers are right in the long term but wrong (noisy) in the short term. Gyros are right in the short term but wrong (drifting) in the long term. Both are needed, each to calibrate the other.

Complementary Filter: This filter combines readings from both the accelerometer and the gyro into a clean and stable angle estimate.

The idea is to use the gyro for short-term angle estimates, by numeric integration, but to switch to the accelerometer for long-term estimates, by averaging. To do this in a continuous way, low-pass and high-pass filters are used.

Kalman Filter: The Kalman filter minimizes the mean square error of the estimated parameters. It is recursive, so new measurements can be processed as they arrive. It is more effective than the complementary filter but difficult to implement in some cases.
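The complementary filter reduces to one line per update. The sketch below fuses a biased gyro with a noisy accelerometer (the filter coefficient, bias and noise figures are all illustrative assumptions):

```python
import random

random.seed(0)

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter update: trust the integrated gyro in the short term and
    the accelerometer angle in the long term. alpha near 1 is typical;
    the exact value here is an illustrative assumption."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated hover: true angle 0, gyro has a constant 0.5 deg/s bias,
# accelerometer is noisy but unbiased (all numbers illustrative).
angle, dt = 0.0, 0.01
for _ in range(2000):
    gyro_rate = 0.5                        # bias: pure integration would drift
    accel_angle = random.gauss(0.0, 2.0)   # noisy but centred on the truth
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
print(round(angle, 2))   # stays near 0; gyro-only integration would drift to 10
```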

Quadrotor: A Brighter Future
There are numerous advantages to using quadcopters in developing countries like India. Due to the multi-disciplinary nature of operating a quadcopter, academics from a number of fields need to work together in order to make significant improvements to the way quadcopters perform. Quadcopter projects are collaborations between computer science, electrical engineering and mechanical engineering specialists.

Military and Rescue Operations: Quadcopter unmanned aerial vehicles (UAVs) are used for surveillance by military and law enforcement agencies, as well as for search and rescue missions in urban environments. One such example was seen in the Uttarakhand rescue operation 'Daksh', where the National Disaster Response Force (NDRF) decided to deploy them to locate missing persons.
Commercial: The largest use is in the field of aerial imagery. Using on-board cameras, footage can be streamed live to the ground. This can be used for obstacle avoidance, detecting edges, matching images, etc.


An Introduction to Power Distribution using S.C.A.D.A.

Rajat Garg (3rd year, Department of Electrical Engineering)

Introduction:- The first and foremost thing to be understood, from a layman's point of view, is that SCADA is neither hardware nor software; it is a process, where SCADA stands for Supervisory Control and Data Acquisition. In this study, the application of the SCADA system in controlling a power distribution network is discussed. Nowadays almost all critical industrial infrastructure and processes are managed remotely from central control rooms using SCADA. Apart from the management of the electricity network, the flow of gas and oil through pipes, the processing and distribution of water, the operation of chemical plants and the signalling network for railways are some other applications of SCADA.

Evolution:- The SCADA system became popular in the 1960s as a means to monitor and control remote equipment. Early SCADA systems used mainframe technology and required human operators for actions, decisions and maintenance of the information systems. Because of the cost of this human labour, the early SCADA systems were very expensive; today, with the advent of Programmable Logic Controllers (PLCs), SCADA is much more automated and consequently more cost-efficient. SCADA is the foundation of the distributed automation system, and over the last two decades its use by electrical utility companies in India for smooth monitoring and control of the overall power system has increased manifold. The remote operation of large power system networks, comprising generation, transmission and distribution systems, started using SCADA in the USA as early as 1962. This has further evolved to include closer coordination not only at the regional but also at the national level. The application of SCADA to industrial automation systems in India started in the late 1980s, and it is now widely used for remote operation, control and monitoring of industrial automation as well as power distribution systems. Yet the use of SCADA in the Delhi power sector started only in the late 1990s, and its application is still limited to controlling the transmission network of the power system.

SCADA Architecture:- The RTU (remote telemetry unit), the microwave communication network and the RCS (remote control server) are the backbone of the SCADA system. The RTU of each power distribution substation gathers operational information from the switchgear (using Intelligent Electronic Devices (IEDs), e.g. Multi Function Meters (MFMs)) at the substation and transfers it to the central database through the microwave link. Basically, the RTU collects all information related to remote and manual operations: circuit breakers, relays, spring charge, oil temperature, etc. The SCADA master or control station comprises a Local Area Network (LAN) of RCS and workstations. The RCS stores and processes data according to the system requirements and generates the necessary commands for remote operation of the substation switchgear. Initially, SCADA software was based on the VAX (Virtual Address Extension), VMS (Virtual Memory System) and OpenVMS (Alpha) platforms; however, UNIX and Windows platforms are now used for most SCADA software. Also, SCADA was initially used for large utility networks having 100,000 to 250,000 service points, whereas today it is used for large automation systems with many service points. The system relies heavily on LANs, with communication front end (CFE) processors and the user interface (UI) attached either locally on the same LAN or across a WAN (Wide Area Network).

SCADA as a Power Distribution Management System:- The SCADA system connects two distinctly different environments, namely the field and the control centre, using a communication pathway which must have an adequate signal-to-noise ratio and enough bandwidth, through dedicated telephone lines, optical fibres, power line carrier communication (PLCC), etc. To ensure a stable connection between the two environments, the substations comprise current transformers (CTs), potential transformers, transducers and intelligent electronic devices (IEDs), and communication between the substation terminus and the substation interface takes place at the RTU. It is hence a 2-way communication device that keeps updating the status of the field


continuously and simultaneously executes the commands from the control centre. It consists of two panels, namely the RTU Panel and the Multi Function Meter (MFM) Panel. The RTU panel consists of two racks, namely the Basic Rack and the Extension Rack.

RTU Panel:- The Basic Rack houses the brain of the RTU. It consists of 9 slots into which a set of 'cards' are inserted. These cards function as the CPUs of the RTU. They can be of two types:

a) Serial Line Interface (SLI): The SLI card acts as an interface between the RTU and the IEDs (Intelligent Electronic Devices). It continuously reads data in and out of the IEDs. To communicate, it uses four ports: A, B, 1 and 2. Ports A and B are of RS485 type, while ports 1 and 2 are of RS232 type.

b) Ethernet Cards (ETH): The ETH cards control the process events and communications with the control centres. An ETH card continuously reads data from the extension rack and the SLI cards and sends it to the control centre. It has a port 'E' which is used by the RTU to communicate with the master, and it is connected to the extension rack through ports A or B, also called COM A and COM B. The ETH and SLI cards communicate with each other through a communication channel present on the back pane of the basic rack.

The Extension Rack houses the input/output modules of the RTU. It has slots into which the I/O modules can be inserted. It communicates only with the ETH cards of the basic rack, and where there is more than one extension rack, each communication port of an extension rack is looped with the one succeeding it. The function of the input modules is to send the status of the equipment present in the grid station to the Master Control Centre (MCC), while the function of the output modules is to control the status of the equipment from the MCC. The I/O modules used are:

• DI Cards (23BE21): The DI or digital input cards have 16 channels which can be used for indications. Each channel can take only two states, i.e. either ON or OFF. They are used for breakers, spring charge, etc.

• AI Cards (23AE21): The AI card gives the analog value of a signal. It too has 16 channels, on which 8 signals can be configured. The input current to a channel in the AI card is 4-20 mA DC, proportional to the range of the analog value. The battery charger is configured according to the range of the AI card. It is used for winding temperature, oil temperature, tap position, current, voltage, etc.

• DO Cards (23BA20): The DO cards are used to execute commands sent from the MCC. As soon as a DO card gets a command from the MCC, it sends a pulse of 48 V DC to the exciting terminals of the contactor, which then closes its contacts, and the command is executed.
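The 4-20 mA scaling of an AI-card channel can be expressed directly; the function name and the 0-150 °C winding-temperature range below are illustrative:

```python
def scale_4_20ma(current_ma, range_lo, range_hi):
    """Convert a 4-20 mA AI-channel reading to engineering units:
    4 mA maps to the low end of the configured range, 20 mA to the high end."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("reading outside the 4-20 mA loop range")
    return range_lo + (current_ma - 4.0) / 16.0 * (range_hi - range_lo)

# Hypothetical channel configured for a 0-150 degC winding temperature:
print(scale_4_20ma(12.0, 0.0, 150.0))   # 75.0 (mid-loop -> mid-range)
```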

MFM Panel:- The MFM Panel consists of MFMs. Cutouts are made in the panel matching the size of the MFMs, which are then inserted into the cutouts and tightly clamped. As mentioned before, the MFM is an IED, and it communicates with the MCC through the SLI card. The MFM has 12 terminals to which connections have to be provided: 2 for the auxiliary supply, 4 for the PT secondary, and 6 for the CT secondary. Apart from these terminals, the MFM has a communication port and a port to which a hand-held programming and display unit can be connected. The MFM can calculate values once the inputs from the secondaries of the CTs and PTs have been given. Each MFM is dedicated to a particular panel, be it outgoing or incoming. The MFM calculates and displays values on the hand-held programming and display unit; these values depend on the programmed primary value corresponding to the CT and PT ratio pertaining to that feeder.

Conclusion:- With an inevitable power crisis threatening our country, it is of utmost importance to prevent wastage and uneconomical practices in our power distribution system. This requires further fine-tuning of our network and management systems with the help of SCADA and its infinite possibilities. Hence, SCADA is a ‘work in progress’ in the Indian scenario and requires better infrastructure, logistics and funds. This has been addressed to a certain extent with the advent of public-private partnerships (PPP), which have been a source of much-needed funds, especially in Delhi’s case. However, it is necessary to implement such PPPs not only at the urban level but also at the rural and district levels.


Advanced Medical Image Visualization Tools for Biomedical Engineers

Devendra Kumar Deshmukh

M.Tech. Biomedical Engineering

1. Open Source Medical Image and Modeling Visualization Tools
1.1. Drishti
Drishti stands for vision or insight in Sanskrit, an Indian language. Drishti has been developed keeping in mind the end use, i.e., visualizing tomography data, electron-microscopy data, etc. Understanding the data set is important, and conveying that understanding to the research community or a lay person is equally important; Drishti aims for both. The central idea behind Drishti is that scientists should be able to use it for exploring volumetric data sets as well as in presentations. The software has been completely rewritten from scratch in the past year, and will be continually upgraded to add new useful features.

1.2. AMIDE - Amide’s a Medical Imaging Data Examiner
AMIDE is a completely free tool for viewing, analyzing, and registering volumetric medical imaging data sets. It is written on top of GTK+ and runs on any system that supports this toolkit (Linux, Windows, Mac OS X, etc.).

1.3. TrackVis
TrackVis is a software tool that can visualize and analyze fiber track data from diffusion MR imaging (DTI/DSI/HARDI/Q-Ball) tractography.

1.4. MeVisLab
MeVisLab represents a powerful, modular framework for the development of image processing algorithms and visualization and interaction methods, with a special focus on medical imaging. Besides basic image processing and visualization modules, MeVisLab includes advanced medical imaging algorithms for segmentation, registration, and quantitative morphological and functional image analysis.

1.5. 3D Slicer
3D Slicer is a free, open-source software package for visualization and image analysis. 3D Slicer is natively designed to be available on multiple platforms, including Windows, Linux and Mac OS X. Slicer is a community platform created for the purpose of subject-specific image analysis and visualization. Its capabilities include:
• Multi-modality imaging, including MRI, CT, US, nuclear medicine, and microscopy
• Multi-organ support, from head to toe
• Bidirectional interface for devices
• Expandable, and interfaced to multiple toolkits

1.6. SOFA
SOFA is an open-source framework primarily targeted at real-time simulation, with an emphasis on medical simulation. It is mostly intended for the research community to help develop newer algorithms, but can also be used as an efficient prototyping tool.

2. Commercially Available Medical Image Visualization, Modeling & Simulation Tools
2.1. Mimics
Mimics is software specially developed by Materialise for medical image processing. Use Mimics for the segmentation of 3D medical images (coming from CT, MRI, microCT, CBCT, ultrasound, confocal microscopy), and the result will be highly accurate 3D models of the patient’s anatomy. You can then use these patient-specific models for a variety of engineering applications directly in Mimics or 3-matic, or export the 3D models and anatomical landmark points to third-party software, such as statistical, CAD, or FEA packages.

Abstract - Recent advancements of technology in the medical and healthcare field bring new paradigms and approaches for dealing with medical image data. Biomedical equipment such as X-ray, C-arm, MRI, CT scan, ultrasound and Cath Lab machines can generate digital image output in formats such as DICOM, NIfTI, etc. For the analysis and interpretation of these images we need software tools. Using the ITK and VTK toolkits we can perform image registration, segmentation, visualization, etc. These toolkits are used to build open-source as well as commercial software. In this article I have listed different open-source as well as commercially available medical image visualization software, which may help in understanding the schema and utilization of this software. These tools are very useful for real-time modeling of a patient’s anatomical structures during clinical treatment, while medical images are acquired and processed. The tools discussed are the following:


2.2. BioCAD
Biomedical Modeling Inc. translates CT and MRI data into accurate anatomical 3D reconstructions. It also has a library of different anatomical structures, or it can help you acquire suitable data for your project. BioCAD model data can be output in a form to meet your needs:
• Biomodels - physical replicas and phantoms
• BioCAD - CAD files in a format suitable for medical device design, such as STEP, IGES, sldprt, stl, fea, cfd, obj, etc.
CAD-compatible Anatomical Models:
BioCAD models are three-dimensional NURBS models that are compatible with standard CAD formats. These models can be used for collecting measurements and executing computer-simulated analysis to improve medical device designs. BioCAD output files may also be used for the production of physical models via additive manufacturing or machining, in the case of models with simpler geometries.

2.3. 3D-DOCTOR
3D-DOCTOR is an advanced 3D modeling, image processing and measurement software for MRI, CT, PET, microscopy, scientific, and industrial imaging applications. 3D-DOCTOR supports both grayscale and color images stored in DICOM, TIFF, Interfile, GIF, JPEG, PNG, BMP, PGM, MRC, RAW and other image file formats. 3D-DOCTOR creates 3D surface models and volume rendering from 2D cross-section images in real time on your PC.

3. A Short Description of DICOM
DICOM (Digital Imaging and Communications in Medicine) is the international standard for medical images and related information (ISO 12052). It defines the formats for medical images that can be exchanged with the data and quality necessary for clinical use. DICOM is implemented in almost every radiology, cardiology imaging, and radiotherapy device (X-ray, CT, MRI, ultrasound, etc.), and increasingly in devices in other medical domains such as ophthalmology and dentistry. With tens of thousands of imaging devices in use, DICOM is one of the most widely deployed healthcare messaging standards in the world. There are literally billions of DICOM images currently in use for clinical care. Since its first publication in 1993, DICOM has revolutionized the practice of radiology, allowing the replacement of X-ray film with a fully digital workflow. Much as the Internet has become the platform for new consumer information applications, DICOM has enabled advanced medical imaging applications that have “changed the face of clinical medicine”. From the emergency department, to cardiac stress testing, to breast cancer detection, DICOM is the standard that makes medical imaging work - for doctors and for patients.

Last but not least, I want to conclude that this field has immense scope for medical software tool design. In addition, using these tools, one can perform very complex analyses of human anatomical structures. Detailed information about all of these tools is available online.
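As a concrete illustration of the file format behind the standard described above: a DICOM Part 10 file begins with a 128-byte preamble followed by the four magic bytes "DICM". The helper name below is my own; real work with DICOM data would use a library such as pydicom or ITK, but the header check itself is this simple:

```python
def is_dicom(buf: bytes) -> bool:
    """Check the DICOM Part 10 magic: a 128-byte preamble
    (often all zeros) followed by the bytes b'DICM'."""
    return len(buf) >= 132 and buf[128:132] == b"DICM"

# A synthetic header for illustration: zero preamble + magic
header = bytes(128) + b"DICM"
print(is_dicom(header))          # True
print(is_dicom(b"not a scan"))   # False
```

In practice a viewer would read only the first 132 bytes of a file to decide whether to hand it to a DICOM parser, since the File Meta Information (transfer syntax, SOP class, etc.) follows immediately after the magic.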


A Case for Pharmacogenomics in Management of Cardiac Arrhythmias

Gaurav Kandoi, Anjali Nanda, Vinod Scaria, Sridhar Sivasubbu
Department of Biotechnology, Delhi Technological University

Arrhythmias, or disorders of the cardiac rhythm, are not uncommon in clinical settings and are one of the major causes of mortality and morbidity. Atrial fibrillation is thought to be rare in young, healthy individuals without underlying cardiac pathology, while it is prevalent in the elderly and affects roughly 2-5 million individuals in the United States alone. Ventricular fibrillation has a smaller incidence of close to 0.4 million. Ventricular tachyarrhythmias contribute significantly to the morbidity and mortality in patients with underlying coronary artery disease. It has been estimated that close to half of deaths due to coronary artery disease are caused by ventricular arrhythmias. Apart from genetic causes and underlying cardiac disease, a number of therapeutic agents, including drugs not directly used in the therapy of cardiac rhythm abnormalities, have now been implicated in significant prolongation of the QT interval and in a form of ventricular arrhythmia, torsades de pointes, which is potentially fatal. Recent reports also point to cardiac arrhythmias as one of the top causes of drug withdrawal and failure of clinical trials.

No major study on the incidence of arrhythmias, or of adverse drug reactions to anti-arrhythmic drugs, has been performed across India. The lack of adequate epidemiological data in this important area has been highlighted in recent publications. According to a report on arrhythmia care in India, published in 2002, the prevalence of patients with arrhythmias in the country is around 2 million. Studies have also pointed to the high prevalence of asymptomatic arrhythmias in elderly patients. According to reports from the National Pharmacovigilance Programme, several cases of adverse drug reactions to anti-arrhythmic agents have been reported from people across India. Verapamil and Amiodarone have been reported to cause Stevens-Johnson syndrome. Atenolol has similarly been reported to cause adverse drug events like fatigue, cough and edema in a study conducted in South India. Similar studies have shown Atenolol to be associated with around 4-5% of the total adverse drug reactions reported.

Individuals vary widely in their response to therapeutic agents, and a large component of this variability is modulated by the genetic makeup of the individual. Apart from the variability in response, genetic variations are also now known to contribute significantly to Adverse Drug Reactions (ADRs). One of the earliest contributions to the understanding of the genomics of external agents stemmed from the observations of the British physician Garrod, who proposed that defects in enzymatic pathways in unusual diseases of metabolism could produce unusual sensitivity to chemical agents. Molecular genetic dissection of congenital conditions in humans has contributed immensely to the overall understanding of the genetics of heart rhythm. The field has now grown by leaps and bounds with the advent of modern tools and techniques, which enable dissecting genetic phenomena at single base-pair resolution. The advent of genomics technologies has paved the way to deciphering the molecular genetic mechanisms of variability in response to therapeutic agents. This variability could be caused by genetic variations which modulate the pharmacokinetics or pharmacodynamics of the drug, ranging from variations in genes involved in drug transport and metabolism right up to variations in drug targets and off-targets. The study of genetic variability in the response to drugs has now emerged into a full-fledged branch of biology - pharmacogenomics - with the potential to significantly improve disease management. The field has also offered novel clues towards understanding the mechanisms and pathways in which therapeutic agents are involved. The last couple of decades have seen enormous improve-

Abstract - Disorders of the cardiac rhythm are quite prevalent in clinical practice. Though the variability in drug response between individuals has been extensively studied, this information has not been widely used in clinical practice. Rapid advances in the field of pharmacogenomics have provided us with crucial insights into inter-individual genetic variability and its impact on drug metabolism and action. Technologies for faster and cheaper genetic testing, and even personal genome sequencing, would enable clinicians to optimize prescription based on the genetic makeup of the individual, which would open up new avenues in the area of personalized medicine. We have systematically looked at literature evidence on pharmacogenomics markers for anti-arrhythmic agents from the OpenPGx consortium collection and reason about the applicability of genetics in the management of arrhythmia. We also discuss potential issues that need to be resolved before personalized pharmacogenomics becomes a reality in regular clinical practice.


ments in the management of cardiac arrhythmias. Due to limited benefits and safety-related concerns, very few drugs have been successful and commonly used in the treatment of arrhythmias. The field has also seen the emergence of newer classes of drugs which function by normalizing channel activity rather than blocking it. According to the popular Singh-Vaughan Williams classification schema, drugs are placed in classes based on their mechanism of action. The classification scheme has improved over time, and presently includes a miscellaneous class for drugs which could not fit any of the previous classes. Recent years have seen a number of publications detailing the pharmacogenomics of anti-arrhythmic drugs. Though many classes of anti-arrhythmic agents are no longer used in regular clinical practice except in special settings, the wealth of information on pharmacogenomics encompasses the commonly used classes of drugs as well.

Realizing the dream of personalized medicine is not without challenges, and will require focused intervention. The major challenge in understanding the intricacies of genomic variations, and in deciphering their potential effects on pharmacokinetics and pharmacodynamics, is the lack of comprehensive models of drug metabolism and action for many drugs. Understanding and charting drug pathways is the first step towards this dream. A systems-level understanding of drug pathways would enable us to overlay genomic variations and offer informed guesses about the drugs that could be involved. The pathways for many drugs are complicated, involving multiple and sometimes redundant mechanisms for drug transport, metabolism and targets. Deciphering the pathways is the first step towards understanding how genetic variation could potentially contribute to changes in the functionality of critical components of the drug pathway.
In addition, it also provides crucial insights into the molecular mechanisms of drug-drug and drug-environment interactions and into how genetic variations could modulate these phenomena. The major area requiring focused attention in the immediate future is standardized efforts to collate pharmacogenomics data and evidence to enable meta-analysis, while at the same time keeping pace with the avalanche of evidence brought to light by high-throughput genomics studies, including genome-wide association studies (GWAS). Community-led approaches like PharmGKB (www.pharmgkb.org) and crowdsourcing approaches like OpenPGx (www.openpgx.org) are possible ways forward, and the two approaches should be organized so as to complement each other. Apart from the data, the second focus area is computational tools and resources that can handle high-throughput datasets. The availability of genome-wide scans as direct-to-consumer services has provided an immense opportunity and challenge at the same time. With adequate computational tools and resources for interpretation of the data, this has the potential to lower the cost while widening the general acceptability of genetic testing. No healthcare intervention system is complete without adequate education and empowerment of medical and paramedical professionals and of patients. For widespread acceptability and application of pharmacogenomics testing for cardiac arrhythmias to succeed, appropriate focus and emphasis on awareness and healthcare education is essential.

These efforts should be complemented and supplemented by systematic ways of collecting data and the ability to analyze it to unravel emerging phenomena. This would necessitate the creation of effective systems for the systematic collection and sharing of clinical data, treatment protocols and outcome measures. This includes setting up registries which follow standard protocols, metadata and modes of data exchange. It also requires setting up collaborative and shared data resources and analytical approaches. In summary, seamless exchange of ideas, resources and know-how between research laboratories and clinicians is essential to make pharmacogenomics-based personalized medicine a reality.

Modeling a disease process or pathway is the next critical step in understanding the molecular mechanisms and the genetic architecture of disease processes. Animal models such as rodents and other mammals have been used successfully for modeling cardiac arrhythmias. Recent advances include the application of newer model systems for understanding pharmacogenomics principles. Model organisms like zebrafish, which are easy to maintain and study, have been shown to be useful in modeling pharmacological principles and the potential mode of action of many therapeutic agents.
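The collation efforts described above ultimately enable a simple clinical operation: looking up a patient's genotyped variants in a curated table of drug-gene markers. The sketch below illustrates that lookup only; the table entries are illustrative placeholders invented for this example (the CYP2D6 *4/*4 poor-metabolizer genotype is real, but the attached notes are not clinical guidance, and no such table schema is defined by PharmGKB or OpenPGx):

```python
# Hypothetical marker table: (gene, genotype/variant) -> note.
# Entries are illustrative placeholders, not clinical guidance.
MARKERS = {
    ("CYP2D6", "*4/*4"): "poor metabolizer: review beta-blocker dosing",
    ("SCN5A", "rs1805124"): "possible altered response to class I agents",
}

def flag_markers(patient_variants):
    """Return the notes for any pharmacogenomics markers the patient
    carries, per the illustrative table above. Variants not in the
    table are silently ignored."""
    return [MARKERS[v] for v in patient_variants if v in MARKERS]

notes = flag_markers([("CYP2D6", "*4/*4"), ("KCNH2", "rs3815459")])
print(notes)  # only the CYP2D6 note matches
```

The hard part, as the article argues, is not this lookup but curating the table: standardized evidence levels, meta-analysis across studies, and keeping the annotations current as GWAS results accumulate.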


Chairperson - Rishi Pandey
Vice-Chairperson - Nancy Aggarwal
General Secretary - Abhishek Jain
Treasurer - Kumar Sanjog
Joint Treasurer & Head, Corporate Affairs - Kalpit Sarda
Head, Infrastructure & Logistics - Neeraj Gupta
Head, External Affairs - Fawzan Yusuft
Head, Research & Web Development - Dhruv Balhara
Head, HR & PR - Megha Bolia
Head, Technical Affairs (Hardware) - Sanket Kumar
Head, Technical Affairs (Software) - Akhil Lohchab
Head, Technical Affairs (Software) - Pranjali Pratik
Head, Technical Support (Hardware) - Nikhil Gupta
Head, Technical Support (Software) - Akash Chauhan
Head, Design and Graphics - Richa Choudhary
Head, Publications - Jigyasu Juneja
Head, PES - Eira Kochhar
Secretary, PES - Himanshu Jain
Head, WIE - Prerna Sabharwall
Secretary, WIE - Prerna Modi
Joint Secretary, WIE - Ankita Saldhi

IEEE-DTU STUDENT COUNCIL 2013-14
