© 2017, IJARCSSE All Rights Reserved Page | 121
Volume 7, Issue 3, March 2017 ISSN: 2277-128X
International Journal of Advanced Research in Computer Science and Software Engineering
Research Paper available online at: www.ijarcsse.com
An Improved MLFQ Scheduler with Dynamic Time Slice Using Neural Approach

G. Siva Nageswara Rao
Associate Professor, Dept of CSE, K L University, Guntur, AP, India

S. V. N Srinivas
Professor, Dept of CSE, PNC & KR PG College, Narasarao Pet, India
DOI: 10.23956/ijarcsse/V7I3/0166
Abstract: Computers in the current times are indispensable to mankind. They perform many tasks in much less time than humans, opening up new paths of exploration. One important aspect of computers is multi-tasking, which enables a system to execute multiple tasks concurrently. The scheduler causes the processor to perform context switching among the various processes. It divides CPU time among the processes based on a specific policy concerning response time, throughput, and user interactivity. Criteria considered when selecting a good scheduling algorithm include CPU utilization, load average, response time, turnaround time, and throughput. Ideally, one should view scheduling as an optimization problem: throughput and CPU utilization should be maximized, whereas the other factors should be minimized. Occasionally it is preferable to minimize the variance of a criterion rather than the criterion itself, since consistency is preferred over inconsistency.
Keywords: Dynamic Time quantum, waiting time, turnaround time, Neural Network, Process Scheduling, MLFQ
scheduling.
I. INTRODUCTION
1.1 Scheduling
The process scheduler is an integral component of an operating system; it is responsible for allocating CPU time slices to processes while enforcing a scheduling policy that balances user interactivity, throughput, real-time responsiveness, and more. The most widely used scheduling algorithms that support user interactivity are priority based [1][17][18]. The fundamental plan is to keep the CPU as busy as possible by executing a process until it must wait for an event, and then switching to another process. Processes alternate between consuming CPU cycles (CPU bursts) and performing I/O (I/O bursts) [2][19][20]. Scheduling is a difficult decision-making activity because of conflicting goals, finite resources, and the difficulty of precisely modelling real-world scenarios [13][21][22].
1.2 Scheduling Criteria
There are a number of criteria to consider when trying to select the "best" scheduling algorithm for a particular situation, including CPU utilization, throughput, turnaround time, waiting time, load average, and response time. In general, one optimizes the average value of a criterion (maximizing CPU utilization and throughput, and minimizing all the others). Sometimes, however, one needs to do something different, such as minimizing the maximum response time [2][23][6][7]. And sometimes it is more attractive to reduce the variance of a criterion than its actual value [8][9][10][11].
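The criteria listed above can all be computed from a finished schedule. The following sketch does so; the two-job FCFS example and the record field names are illustrative, not from the paper:

```python
def criteria(procs):
    """procs: list of dicts with arrival, burst, first_run, completion times."""
    n = len(procs)
    turnaround = [p["completion"] - p["arrival"] for p in procs]
    waiting = [t - p["burst"] for t, p in zip(turnaround, procs)]
    response = [p["first_run"] - p["arrival"] for p in procs]
    makespan = max(p["completion"] for p in procs) - min(p["arrival"] for p in procs)
    return {
        "avg_turnaround": sum(turnaround) / n,
        "avg_waiting": sum(waiting) / n,
        "max_response": max(response),   # sometimes minimized instead of the mean
        "throughput": n / makespan,      # completed jobs per unit time
    }

# Two jobs run back to back (FCFS): P1 occupies [0, 5), P2 occupies [5, 8).
jobs = [
    {"arrival": 0, "burst": 5, "first_run": 0, "completion": 5},
    {"arrival": 1, "burst": 3, "first_run": 5, "completion": 8},
]
print(criteria(jobs))
```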
1.3 Choice of scheduling algorithm
When designing the scheduler of a commodity operating system, the author has to choose the algorithm best suited to the task at hand. The chosen algorithm has to complete the desired processes with the intended throughput or execution time, usually by combining several mechanisms. For example, Windows Vista/XP/NT uses a mixture of fixed-priority pre-emptive scheduling, a multilevel feedback queue, first come first served, and round robin.
1.4 MLFQ Scheduling
Multilevel feedback queue (MLFQ) algorithms are extensions of the more general MLQ algorithms. As described by Silberschatz, Galvin and Gagne [2][3][4], the MLQ algorithm divides the ready queue into several separate queues. Each process is assigned to one queue based on properties such as memory size, process priority, or process type, and each queue has its own scheduling algorithm. The major distinction is that, in MLFQ algorithms, jobs are allowed to move from one queue to another according to their runtime behaviour [15][24]. By moving long-running jobs from the highest priority queues to the lowest priority ones, the algorithm ensures that short jobs and I/O-intensive processes get the CPU much earlier. After using up a quantum of its time slice, a job is decreased in priority and moved to a lower priority queue [5].
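A minimal sketch of the demotion rule just described; the three-level queue layout and the quanta are illustrative choices, not the paper's configuration:

```python
from collections import deque

def mlfq(jobs, quanta):
    """jobs: {name: burst}. quanta: time slice per priority level (index 0 = highest)."""
    queues = [deque() for _ in quanta]
    for name, burst in jobs.items():
        queues[0].append([name, burst])          # every job starts at top priority
    finish_order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(remaining, quanta[level])
        remaining -= run
        if remaining == 0:
            finish_order.append(name)
        else:
            # used the full quantum without finishing -> demote one level
            dest = min(level + 1, len(queues) - 1)
            queues[dest].append([name, remaining])
    return finish_order

print(mlfq({"short": 3, "long": 20}, quanta=[4, 8, 16]))
```

The long CPU-bound job sinks through the levels while the short job finishes immediately at the top, which is exactly the behaviour the text attributes to MLFQ.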
Rao et al., International Journal of Advanced Research in Computer Science and Software Engineering 7(3), March 2017, pp. 121-129
II. ARTIFICIAL INTELLIGENCE AND SCHEDULING
Artificial Intelligence (AI) aims at constructing artifacts (machines, programs) that have the ability to learn, adapt, and show human-like aptitude. Hence, learning algorithms are essential for realistic applications of AI. The field of machine learning is the study of methods for programming computers to learn. Research shows that artificial neural networks (ANNs) can be used as a machine learning tool to learn the scheduling process [12][13]. An ANN is a data modelling tool that is able to capture and represent complex, non-linear input/output relationships. Neural networks have numerous applications in scheduling. These include: searching networks (Hopfield net), probabilistic networks (Boltzmann machine), error-correcting networks (multilayer perceptron), competing networks, and self-organizing networks. One of the advantages of using neural networks is that they can be trained for specific use cases by experts and sophisticated users and then reused by less technically inclined users without requiring any knowledge of neural networks or programming. The training of a network is done entirely offline. This means that it is not necessary to have a kernel customized to train the networks. In fact, networks can be trained (as well as tested) on any other operating system if necessary. The feature data is extracted from a custom kernel and written to a file that can be used for training and testing.
2.1 Neural Network in MLFQ
An attractive feature of a neural network is its ability to learn. This provides a level of flexibility that does not exist in current scheduling disciplines. Detecting a state that leads to poor performance, and learning a procedure to avoid it, is one example of the usefulness of this learning capability [14]. Neural networks are the principal AI technique to use here because, once trained, they can be deployed in different situations. Inputs and outputs are continuous numbers, which is also the form in which process data is usually available. They also allow a kind of "brute force" search by testing: one can simply add features even if it is unclear how those features can be used to compute process priorities [16]. The network can "figure out" how to do this, if it is possible. In addition, in the presence of noise at the input, neural networks degrade gracefully [3][14][15].
III. THE PROPOSED WORK
MLFQ uses a number of queues with different time slices. The static time slice assigned to each queue is a limitation of the MLFQ algorithm. In this paper, we propose a novel variation of MLFQ in which the time slice assigned to each queue changes dynamically with each round, and a neural network is used to adjust this time slice further to optimize the response time. The overall performance of MLFQ with a dynamic time slice and a neural network is observed to be better than that of MLFQ with a static time slice for each queue. Many context switches occur if the time slice is very small, and the algorithm degenerates to FCFS if it is extremely big. We try to solve this problem by choosing the dynamic time slice suitably, where the time slices are adjusted in every round according to the current priorities.
3.1 DTQ allocation with Neural approach
Turnaround time (TAT) is the interval between the instant a process is submitted and the instant of its completion. A neural network is used to optimize turnaround time. The number of queues and the quantum of each queue affect the turnaround time directly. In this paper, we propose an algorithm for solving these problems and optimizing the turnaround time. In this algorithm a neural network is used to discover the quantum of each queue that minimizes turnaround time. The following method can be applied using the neural network:
1. Execute the programs with several different time slices and record which slice gives the lowest turnaround time.
2. Build a knowledge base of the static and dynamic features of the programs from the execution traces obtained in step 1, and train the neural network on it.
3. When a new program arrives, classify it and run it with the predicted time slice.
4. If the new program instance is not in the knowledge base, go to step 1.
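The four steps above can be sketched as a profile-then-lookup loop. The feature tuple, the candidate slices, and the toy cost model below stand in for the paper's trained neural network and real execution traces; they are illustrative assumptions:

```python
def profile_best_slice(run_with_slice, candidates):
    """Step 1: try several time slices, keep the one with the lowest turnaround time."""
    return min(candidates, key=run_with_slice)

knowledge_base = {}   # Step 2: program features -> best time slice

def predicted_slice(features, run_with_slice, candidates=(2, 4, 8, 16)):
    if features in knowledge_base:                            # Step 3: known program class
        return knowledge_base[features]
    best = profile_best_slice(run_with_slice, candidates)     # Step 4: unknown -> profile it
    knowledge_base[features] = best
    return best

# Toy cost model: turnaround time is lowest at slice 4 for this "program class".
cost = lambda q: abs(q - 4) + 10
print(predicted_slice(("matrix-mult", "small-input"), cost))   # profiled
print(predicted_slice(("matrix-mult", "small-input"), cost))   # knowledge-base hit
```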
Fig 1: Architecture for Intelligent DTQ allocation.
The dynamic quantum is computed from Bsp, the burst span, calculated as the average of the first, middle, and last burst values of the queue. The average of the priorities and the highest priority of the queue are also used to estimate the dynamic time slice. Once the dynamic quantum is assigned to the queue, it is fed to the neural network, which adjusts the quantum again to achieve a minimized response time. As shown in Figure 1, the neural network updates its weights, changes the quantum of the input queues, and outputs a new quantum for each queue. We can then observe the effect of this change on the average response time before the new quantum values are supplied to the NN function again.
The quanta of the queues are fed back to the inputs recursively, meaning that only the new quantum of an individual queue is fed to the input while the other queues receive their previous values as inputs. After substituting the new quantum of a particular queue into the NN function, using a pre-assumed default set of processes to obtain the baseline turnaround time, the new turnaround time caused by this change is established. When a change is applied to the quantum of a particular queue, the number of queues can change: it is possible that reducing the quantum causes additional processes to be moved to the lower queues, or a new queue to be added to the set of required queues. By characterizing or recognizing programs, it may be possible to identify their previous execution history and predict their resource needs. Figure 2 shows how to reduce the TAT of programs using the NN procedure. We determine certain static and dynamic descriptors of a program, such as input type, input size, text size, process size, and uninitialized data size; these are taken as the features from which the NN optimizes the turnaround time. We define the dynamic time slice as the CPU burst time that minimizes turnaround time.
Fig 2: Design of the Network.
3.2 Parameter Selection
The rate of decline in turnaround time gradually increases with the input size of the program. We consider here processes of both computation-bound and I/O-bound types, so we need to study the execution behaviour of numerous programs along with the descriptors that can be used to predict the dynamic time slice. We can take representative programs such as matrix multiplication, array sorting, recursion, and random number generation, and we use labelled data about these processes for training.
3.3 Proposed Algorithm
no = total number of processes
Priority = number of priorities
Prhighest = highest priority in the queue
AVGpriority = average of priorities
Bsp = burst span
Input: number of processes (P1, P2, ..., Pn)
Burst times of the processes (Bs1, Bs2, ..., Bsn)
Priorities of the processes (Pri1, Pri2, ..., Prim)
TQ = time quantum
Bi = initial burst value of the queue
Bm = mid burst value of the queue
Bl = last burst value of the queue
Output: TaTAv = average turnaround time
1. A process arrives in the ready queue with its arrival time and predicted burst time.
2. Sort the processes into queues in ascending order of burst time.
3. Assign priorities to the processes in ascending order.
4. Record the information about input type, input size, text size, program size, and uninitialized data size.
5. For each queue x = 1 to n repeat the following:
5.1 Compute the time quantum for queue Qx as TQ(Qx) = (Bsp * no) / (AVGpriority * Prhighest), where Bsp = (Bi + Bm + Bl) / 3, and compute the average turnaround time TaTAv.
For each job in that particular queue:
Step a. Do steps b, c and d WHILE the ready queue REQUEST is not empty.
Step b. Choose the first process from the ready queue and assign the CPU to it for a time interval of up to one time quantum.
Step c. If the remaining CPU burst time of the currently running process is less than or equal to half of the time quantum, allocate the CPU again to the currently running process for the remaining CPU burst time. After it completes execution, remove it from the ready queue and go to step a.
Step d. Otherwise, remove the currently running process from the ready queue REQUEST and put it at the tail of the ready queue.
5.2 Using the RNN, find the optimal quantum of the queue according to the other queues' quanta and the average turnaround time found in the earlier stages.
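Step 5.1 and steps a-d can be sketched directly. The text does not fix whether the "highest priority" is the largest or the smallest numeric value; the largest is assumed here, and the five-process queue is illustrative:

```python
from collections import deque

def dynamic_quantum(bursts, priorities):
    """Step 5.1: TQ = (Bsp * no) / (AVGpriority * Prhighest), Bsp = (Bi + Bm + Bl) / 3."""
    n = len(bursts)
    bsp = (bursts[0] + bursts[n // 2] + bursts[-1]) / 3      # burst span
    avg_priority = sum(priorities) / len(priorities)
    pr_highest = max(priorities)                              # assumption: largest value
    return (bsp * n) / (avg_priority * pr_highest)

def run_queue(bursts, tq):
    """Steps a-d: round robin, except that a job with at most half a quantum
    left (step c) is allowed to finish in place instead of being requeued."""
    ready = deque(enumerate(bursts))
    time, completion = 0, {}
    while ready:
        pid, rem = ready.popleft()
        run = min(rem, tq)
        time += run
        rem -= run
        if rem == 0:
            completion[pid] = time
        elif rem <= tq / 2:          # step c: let it finish now
            time += rem
            completion[pid] = time
        else:                        # step d: back to the tail
            ready.append((pid, rem))
    return completion

bursts = [2, 5, 12, 20, 26]          # sorted ascending, as step 2 requires
priorities = [1, 2, 3, 4, 5]
tq = dynamic_quantum(bursts, priorities)
print(round(tq, 3))                  # ((2 + 12 + 26)/3 * 5) / (3 * 5)
print(run_queue(bursts, tq))
```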
3.4 Implementation and Experimental Setup
The experimental method is divided into four stages. In the first stage, we build the data set from the programs' run traces and create the knowledge base with the static and dynamic features of the programs. In the second stage, we design and simulate the MLFQ scheduler to study the behaviour of MLFQ scheduling, in terms of average turnaround time, when using a dynamic time slice. In the third stage we use the NN to discover a more appropriate time slice that yields the minimum average turnaround time, by providing it additional information about the process on which to base the time quantum. In the fourth stage we run the scheduler with the neural approach, where the time quantum given to the scheduler is the one suggested by the RNN.
The experiment is carried out with a ready queue of 10 processes p1, p2, ..., p10 with burst times 23, 75, 93, 48, 2, 5, 12, 20, 26 and 34, in two different cases: zero arrival time and non-zero arrival time. Tables 1.1 and 1.2 present the information about the processes, their burst times with zero and non-zero arrival times, and their priorities.
Table 1.1: Processes with Zero Arrival Time
S.NO Process Name Burst Time Priority
1 P1 23 1
2 P2 75 2
3 P3 93 3
4 P4 48 6
5 P5 2 4
6 P6 5 5
7 P7 12 7
8 P8 20 8
9 P9 26 9
10 P10 34 10
Table 1.2: Processes with Non-Zero Arrival Time
S.NO Process Name Burst Time Priority Arrival Time
1 P1 23 1 0
2 P2 75 2 10
3 P3 93 3 15
4 P4 48 6 20
5 P5 2 4 21
6 P6 5 5 24
7 P7 12 7 26
8 P8 20 8 28
9 P9 26 9 30
10 P10 34 10 32
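The table data can be pushed through step 2 of the proposed algorithm (sorting the ready queue by ascending burst time). A sketch; note the table's own priority column does not follow burst order, so the ranks below are simply step 3 applied mechanically, not the priorities listed in the table:

```python
procs = [  # (name, burst, arrival) from Table 1.2
    ("P1", 23, 0), ("P2", 75, 10), ("P3", 93, 15), ("P4", 48, 20),
    ("P5", 2, 21), ("P6", 5, 24), ("P7", 12, 26), ("P8", 20, 28),
    ("P9", 26, 30), ("P10", 34, 32),
]

ready = sorted(procs, key=lambda p: p[1])   # step 2: ascending burst time
ranked = [(name, burst, rank)               # step 3: priorities in that order
          for rank, (name, burst, _) in enumerate(ready, 1)]
print(ranked[:3])   # shortest bursts first: P5, P6, P7
```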
Prediction of turnaround time (TAT):
The burst time and arrival time of each process are given as inputs to the LM NN, as shown in Fig. 1.1.
Fig. 1.1 Process Input Data
The turnaround time of each process is supplied as the output target to the LM NN model, as shown in Fig. 1.2.
Fig. 1.2 Process Outputs
Fig. 1.3 shows the execution process of the neural network.
Fig. 1.3 Neural Network Execution
Fig. 1.4 shows the overall performance of the network.
Fig. 1.4 Overall Performance
Fig. 1.5 shows the sample input to the LM NN model.
Fig. 1.5 Sample input
Fig. 1.6 shows the predicted turnaround time for the sample input data.
Fig. 1.6 Predicted Outputs
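The prediction step can be illustrated with a toy stand-in. A real LM NN would be trained with the Levenberg-Marquardt solver; the sketch below substitutes plain gradient descent on a single hidden layer, and the target TAT values are synthetic, so it only illustrates the mapping from (arrival, burst) inputs to a TAT output:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 23], [10, 75], [15, 93], [20, 48], [21, 2]], dtype=float)
y = 2.0 * X[:, 1:2] + 5.0            # synthetic TAT targets, roughly proportional to burst

# normalize inputs and targets for stable training
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)        # hidden layer
    pred = h @ W2 + b2
    err = pred - yn
    # backpropagation through the two layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = Xn.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(Xn @ W1 + b1) @ W2 + b2 - yn) ** 2).mean())
print(round(mse, 4))                 # should be close to 0 after training
```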
IV. CONCLUSION
When designing the scheduler of a commodity operating system, the author has to choose the algorithm best suited to the task at hand. The chosen algorithm has to complete the desired processes with the intended throughput or execution time, usually by combining several mechanisms. For example, Windows Vista/XP/NT uses a mixture of fixed-priority pre-emptive scheduling, a multilevel feedback queue, first come first served, and round robin. There is no universal "best" scheduling algorithm, and numerous operating systems use extensions or combinations of the scheduling algorithms above.
Neural networks are the ideal AI technique to use here because, once trained, they can be deployed in different situations. The inputs and outputs are continuous numbers, which is also the form in which process data is usually available. They also allow a kind of "brute force" search by conducting tests: one can simply include features even when it is not yet clear how they can be used to calculate process priorities, and the network can "figure out" how to do this, if possible. In addition, in the presence of noise at the input, neural networks degrade gracefully. A drawback of neural networks, however, is that it is very difficult to extract any meaning from their structure, because their knowledge is sub-symbolically encoded. When a network is trained on a new attribute, it does not reveal much about how that attribute understandably relates to the other attributes. On the other hand, there has been some research on extracting rules from neural networks; this could be useful for creating more compact rule-based schedulers after experimenting with the neural network scheduler. Finally, features may be introduced by subsystems that are all "equally important" for classification; if this is the case, neural networks will work better than algorithms that examine their input values one at a time in a hierarchical way.
V. FUTURE SCOPE
Our future work will include extending our technique to evaluate the performance of the scheduler for other kinds of processes. For that, we need to study the various parameters of a process affecting turnaround time or response time, whichever is to be minimized according to the application. This might also be useful for creating more optimized rule-based schedulers after carrying out tests with the Recurrent Neural Network scheduler.
REFERENCES
[1] A. Silberschatz, P. B. Galvin, and G. Gagne, "Operating System Concepts", Wiley, 2009.
[2] A. S. Tanenbaum, "Modern Operating Systems", Prentice Hall, 2009.
[3] J. R. Quinlan, "Comparing connectionist and symbolic learning methods", Volume I: Constraints and Prospects, pp. 445-456, MIT Press, 1994.
[4] M. A. F. Al-Husainy, "Best-job-first CPU scheduling algorithm", Information Technology Journal, Vol. 6, No. 2, pp. 288-293, 2007.
[5] Jim Basney and Miron Livny, "Managing Network Resources in Condor", Proceedings of the 9th IEEE International Symposium on High Performance Distributed Computing, Washington, DC, USA, 2000.
[6] Shlomi Dolev, Avi Mendelson and Igal Shilman, "Semantical Cognitive Scheduling", Technical Report 13-02, Nov. 2012.
[7] Rakesh Kumar Yadav and Anurag Upadhayay, "A Fresh Loom for Multilevel Feedback Queue Scheduling Algorithm", International Journal of Advances in Engineering Sciences, Vol. 2, Issue 3, July 2012.
[8] Abbas Noon, Ali Kalakech and Seifedine Kadry, "A New Round Robin Based Scheduling Algorithm for Operating Systems: Dynamic Quantum Using the Mean Average", IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 1, May 2011. ISSN (Online): 1694-0814.
[9] Rami J. Matarneh, "Self-Adjustment Time Quantum in Round Robin Algorithm Depending on Burst Time of the Now Running Processes", American Journal of Applied Sciences, Vol. 6, No. 10, 2009.
[10] L. Becchetti, S. Leonardi and S. A. Marchetti, "Average-Case and Smoothed Competitive Analysis of the Multilevel Feedback Algorithm", Mathematics of Operations Research, Vol. 31, pp. 85-108, 2006.
[11] Puneet Kumar, Nadeem and M. Faridul Haque, "Efficient CPU Scheduling Algorithm Using Fuzzy Logic", Vol. 47, IACSIT Press, Singapore, 2012.
[12] Julien Perez, Balazs Kegl and Cecile Germain-Renaud, "Non-Markovian Reinforcement Learning for Reactive Grid Scheduling", hal-00586504, version 1, Apr. 2011.
[13] Gary R. Weckman, Chandrasekhar V. Ganduri and David A. Koonce, "A neural network job-shop scheduler", Springer, 19:191-201, 2008. DOI 10.1007/s10845-008-0073-9.
[14] Dale Tonogai, "AI in Operating Systems: An Expert Scheduler", University of California, Berkeley, Defense Advanced Research Projects Agency, 1988.
[15] Paolo Di Francesco, "Design and Implementation of a MLFQ Scheduler for the Bacula Backup Software", Malardalen University, Sweden, 2012.
[16] Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, "Operating Systems: Three Easy Pieces", 2011 (V0.4).
[17] G. Siva Nageswara Rao, S. V. N. Srinivasu, N. Srinivasu and G. Ramakoteswara Rao, "Enhanced Precedence Scheduling Algorithm with Dynamic Time Quantum (EPSADTQ)", Research Journal of Applied Sciences, Engineering and Technology, 10 (8), pp. 938-941, 2015.
[18] G. Siva Nageswara Rao, N. Srinivasu, K. V. D. Sagar and P. Sai Madhuri, "Comparison of Round Robin CPU Scheduling Algorithm with Various Dynamic Time Quantum", International Journal of Applied Engineering Research, 9 (18), pp. 4905-4916, 2014.
[19] G. Siva Nageswara Rao, S. V. N. Srinivasu, N. Srinivasu and O. Naga Raju, "A New Proposed Dynamic Dual Processor Based CPU Scheduling Algorithm", Journal of Theoretical and Applied Information Technology, 73 (2), pp. 226-231, 2015.
[20] G. S. N. Rao, N. Srinivasu, K. Girish Kumar and B. Abhishek, "An Enhanced Dynamic Round Robin CPU Scheduling Algorithm", International Journal of Applied Engineering Research, 9 (15), pp. 3085-3098, 2014.
[21] G. S. Rao, C. V. P. Krishna and K. R. Rao, "Extreme Programming for Service-Based Application Development Architecture", Proceedings of the 2014 Conference on IT in Business, Industry and Government (CSIBIG 2014), art. no. 7056955, 2014.
[22] G. S. Rao, C. V. P. Krishna and K. R. Rao, "Multi Objective Particle Swarm Optimization for Software Cost Estimation", Advances in Intelligent Systems and Computing, Vol. 248, Volume I, pp. 125-132, 2014.
[23] G. S. Rao, C. V. P. Krishna and K. R. Rao, "Rational Unified Process for Service Oriented Application in Extreme Programming", 4th International Conference on Computing, Communications and Networking Technologies (ICCCNT 2013), art. no. 6726586, 2013.
[24] K. V. Daya Sagar, G. Sivanageswara Rao, T. Srikanth and K. Raghavendra, "A Relational Analytic Platform with Hadoop Using the On Demand Integration (ODI) Capability", International Journal of Applied Engineering Research, 9 (13), pp. 2095-2102, 2014.