Hindawi, Scientific Programming, Volume 2020, Article ID 7081653, 15 pages. https://doi.org/10.1155/2020/7081653

Research Article

Multiswarm Multiobjective Particle Swarm Optimization with Simulated Annealing for Extracting Multiple Tests

Toan Bui,1 Tram Nguyen,2,3 Huy M. Huynh,4 Bay Vo,1 Jerry Chun-Wei Lin,5 and Tzung-Pei Hong6,7

1 Faculty of Information Technology, Ho Chi Minh City University of Technology (HUTECH), Ho Chi Minh, Vietnam
2 Faculty of Information Technology, Nong Lam University, Ho Chi Minh, Vietnam
3 Department of Computer Science, Faculty of Electrical Engineering and Computer Science, VŠB-Technical University of Ostrava, Ostrava, Czech Republic
4 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
5 Department of Computer Science, Electrical Engineering and Mathematical Sciences, Western Norway University of Applied Sciences, Bergen, Norway
6 Department of Computer Science and Information Engineering, National University of Kaohsiung, Kaohsiung, Taiwan
7 Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan

Correspondence should be addressed to Bay Vo; [email protected]

Received 5 February 2020; Revised 8 May 2020; Accepted 8 July 2020; Published 1 August 2020

Academic Editor: Kifayat Ullah Khan

Copyright © 2020 Toan Bui et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Education is mandatory, and much research has been invested in this sector. An important aspect of education is how to evaluate the learners' progress. Multiple-choice tests are widely used for this purpose. The tests for learners in the same exam should come in equal difficulties for fair judgment. Thus, this requirement leads to the problem of generating tests with equal difficulties, which is also known as the specific case of generating tests with a single objective.
However, in practice, multiple requirements (objectives) are enforced while making tests. For example, teachers may require the generated tests to have the same difficulty and the same test duration. In this paper, we propose the use of Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) for generating k tests with multiple objectives in a single run. Additionally, we also incorporate Simulated Annealing (SA) to improve the diversity of tests and the accuracy of solutions. The experimental results with various criteria show that our approaches are effective and efficient for the problem of generating multiple tests.

1. Introduction

In the education sector, evaluation of students' study progress is important and mandatory. There are many methods, such as oral tests or writing tests, to evaluate their knowledge and understanding about subjects. Because of the scalability and ease of human resources, writing tests are used more widely for the final checkpoints of assessment (e.g., final term tests), where a large number of students must be considered. Writing tests can be either descriptive tests, in which students have to fully write their answers, or multiple-choice tests, in which students pick one or more choices for each question. Even though descriptive tests are easier to create at first, they consume a great deal of time and effort during the grading stage. Multiple-choice tests, on the other hand, are harder to create at first as they require a large number of questions for security reasons, as in Ting et al. [1]. However, the grading process can be extremely fast, automated by computers, and bias-free from human graders. Recently, many researchers have invested their efforts to make computers automate the process of creating multiple-choice tests using available question banks, as in the work of Cheng et al. [2]. The results were shown to be promising and, thus, make multiple-choice tests more feasible for examinations.
One of the challenges in generating multiple-choice tests is the difficulty of the candidate tests. The tests for all students should have the same difficulty for fairness. However, it can be


seen that generating all tests with the same level of difficulty is an extremely hard task, even when manually choosing questions from a question bank; the success rate of generating multichoice tests satisfying a given difficulty is low, and the process is time-consuming. Therefore, to speed up the process, some authors chose to automatically generate tests with the use of computers, approximating the required difficulties. This is also known as generating tests with a single objective, where the level of difficulty is the objective. For example, Bui et al. [3] proposed the use of particle swarm optimization to generate tests with difficulties approximating the levels required by users. The tests are generated from question banks that consist of various questions with different difficulties. The difficulty value of each question is judged and adapted based on users via previous real-life exams. The work evaluates three random-oriented approaches, which are Genetic Algorithms (GAs) by Yildirim [4, 5] and Particle Swarm Optimization (PSO). The experiment result shows that PSO gives the best performance concerning most of the criteria, by Bui et al. [3]. Previous works only focused on solving a single objective of test extraction, based on the user-defined difficulty level requirement. In practice, exam tests can depend on multiple factors, such as questions' duration and total testing time. Thus, designing a method that can generate tests with multiple objectives is challenging. Furthermore, the proposed approaches can only extract a single test at each run. To extract multiple tests, the authors have to execute their application multiple times. This method is time-consuming, and duplicate tests can occur because each run is executed separately; besides, the runs do not have any information about each other to avoid duplication.

In this paper, we propose a new approach that uses Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) to extract k tests in a single run with multiple objectives. The multiple swarms correspond to the multiple tests being extracted; however, the search is based on multiple subswarms instead of one standard swarm whose application is executed multiple times to extract multiple tests. The use of diverse subswarms to increase performance when optimizing tests is studied in Antonio and Chen [6]. Additionally, we use Simulated Annealing (SA) to initialize the first population for PSO to increase the diversity of generated tests. We also aim to improve the results on various criteria, such as diversity of solutions and accuracy.

The main contributions of this paper are as follows:

(1) We propose a multiswarm multiobjective approach to deal with the problem of extracting k tests simultaneously.

(2) We propose the use of SA in combination with PSO for extracting tests. SA was selected as it is capable of escaping local optima.

(3) We propose a parallel version of our serial algorithms. Using parallelism, we can control the overlap of extracted tests to save time.

The rest of this paper is organized as follows. Section 2 describes the related research. The problem of extracting k tests from question banks is explained in Section 3.

The normal multiswarm multiobjective PSO and the multiswarm multiobjective PSO with SA for the problem of extracting k tests from question banks are presented in Sections 4 and 5, respectively. The next section analyzes and discusses the experimental results of this study. Finally, future research trends and the conclusions of the paper are provided in Section 6.

2. Related Work

Recently, evolutionary algorithms have been applied to many fields for optimization problems. Some of the most well-known algorithms are Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). GAs were invented based on the concept of Darwin's theory of evolution, and they seek solutions via progressions of generations. Heuristic information is used for navigating the search space for potential individuals, and this can achieve globally optimal solutions. Since then, there have been many works that used GAs in practice [7–11].

Particle swarm optimization is a swarm-based technique for optimization that was developed by Eberhart and Kennedy [12]. It imitates the behavior of a school of fish or a flock of birds. PSO optimizes the solutions via the movements of individuals. The foundation of PSO's method of finding optima is based on the following principles proposed by Eberhart and Kennedy: (1) all individuals (particles) in swarms tend to find and move towards possible attractors; (2) each individual remembers the position of the best attractor that it found. In particular, each solution is a particle in a swarm and is denoted by two variables: one is the current location, denoted by present[], and the other is the particle's velocity, denoted by v[]. They are two vectors in the vector space R^n, in which n changes based on the problem. Additionally, each particle has a fitness value that is given by a chosen fitness function. At the beginning of the algorithm, the initial generation (population) is created either in a random manner or by some methods. The movement of each particle individual is affected by two information sources. The first is Pbest, which is the best-known position that the particle visited in its past movements. The second is Gbest, which is the best-known position of the whole swarm. In the original work proposed by Eberhart and Kennedy, particles traverse the search space by going after the particles with strong fitness values. In particular, after each period, the velocity and position of each individual are updated with the following formulas:

v^{t+1} = v^t + c1 · rand() · (Pbest^t − present^t) + c2 · rand() · (Gbest^t − present^t),

present^{t+1} = present^t + v^{t+1},    (1)

where rand() is a function that returns a random number in the range (0, 1) and c1, c2 are constant weights.
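The update rules in formula (1) can be sketched directly in Python. This is a minimal illustration only; the two-dimensional toy vectors and the weights c1 = c2 = 2.0 are assumptions for the example, not values taken from the paper.

```python
import random

def pso_step(present, velocity, pbest, gbest, c1=2.0, c2=2.0):
    """One PSO movement per formula (1): update the velocity, then the position."""
    new_v = [
        v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
        for v, x, pb, gb in zip(velocity, present, pbest, gbest)
    ]
    new_present = [x + v for x, v in zip(present, new_v)]
    return new_present, new_v

# Toy usage: one particle in R^2 moving toward its Pbest and the swarm's Gbest.
present, velocity = [0.0, 0.0], [0.0, 0.0]
pbest, gbest = [1.0, 1.0], [2.0, 2.0]
present, velocity = pso_step(present, velocity, pbest, gbest)
```

Because both attractors lie in the positive quadrant and the particle starts at the origin, every velocity component is nonnegative and the particle drifts toward them.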

While PSO is mostly used for the continuous value domain, recently some works have shown that PSO can also be prominently useful for discrete optimization. For example, Sen and Krishnamoorthy [13, 14] transformed the original PSO into a discrete PSO for solving the problem of transmitting information on networks. The results prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

To further improve the performance for real-life applications, some variants of PSO have been proposed and exploited, such as multiswarm PSO. Peng et al. [15] proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods, such as the periodically stochastic learning strategy or random mutation learning strategy. The experiments have been run on a set of specific benchmarks. The results showed that the proposed method gives a better quality of solutions and has a higher chance of giving correct solutions than normal PSO. Vafashoar and Meibodi [16] proposed an approach that uses Cellular Learning Automata (CLA) for multiswarm PSO. Each swarm is placed on a cell of the CLA, and each particle's velocity is affected by some other particles. The connected particles are adjusted over time via periods of learning. The results indicate that the proposed method is quite effective for optimization problems on various datasets. In order to balance the search capabilities between swarms, Xia et al. [17] used multiswarm PSO in combination with various strategies, such as the dynamic subswarm number, subswarm regrouping, and purposeful detecting. Nakisa et al. [18] proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.

In practice, there exist a lot of optimization problems with multiple objectives instead of a single objective; thus, a lot of work on multiobjective optimization has been proposed. For example, Li and Babak [19] proposed a multiobjective PSO combined with an enhanced local search ability and parameter-less sharing. Kapse and Krishnapillai [20] also proposed an adaptive local search method for multiobjective PSO using the time variance search space index to improve the diversity of solutions and convergence. Based on crowded distance sorting, Cheng et al. [21] proposed an improved version, circular crowded sorting, and combined it with multiobjective PSO. The approach scatters the individuals of initial populations across the search space in order to be better at gathering the Pareto frontier. The method was proven to improve the search capabilities, the speed of convergence, and diversity of solutions. Similarly, Adel et al. [22] used multiobjective PSO with uniform design instead of traditional random methods to generate the initial population. Based on R2 measurement, Alan et al. [23] proposed an approach that used R2 as an indicator to navigate swarms through the search space in multiobjective PSO. By combining utopia point-guided search with multiobjective PSO, Kapse and Krishnapillai [24] proposed a strategy that selects the best individuals that are located near the utopia points. The authors also compared their

method with other algorithms, such as NSGA-II (Nondominated Sorting Genetic Algorithm II) by Deb et al. [25] or CMPSO (coevolutionary multiswarm PSO) by Zhan et al. [26], on several benchmarks and demonstrated the proposed method's effectiveness. Saxena and Mishra [27] designed a multiobjective PSO algorithm named MOPSO tridist. The algorithm used triangular distance to select leader individuals, which cover different regions in the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSO tridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images. Liansong and Dazhi [28] designed a multiobjective optimization based on chaotic particle swarm optimization, and Xiang and Xueqing [29] proposed an extension, the MSCLPSO algorithm, based on comprehensive learning particle swarm optimization, incorporating various techniques from other evolutionary algorithms. In order to increase the flexibility of multiobjective PSO, Mokarram and Banan [30] proposed the FC-MOPSO algorithm that can work on a mix of constrained, unconstrained, continuous, and/or discrete optimization problems. Recently, Mohamad et al. [31] reviewed and summarized the disadvantages of multiobjective PSO. Based on that, they proposed an algorithm, M-MOPSO. The authors also proposed a strategy based on dynamic search boundaries to help escape the local optima. M-MOPSO was proven to be more efficient when compared with several state-of-the-art algorithms, such as the Multiobjective Grey Wolf Optimizer (MOGWO), the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), and Multiobjective Differential Evolution (MODE).

An extension of multiple objective optimization problems is the dynamic multiple objective optimization problem, in which each objective changes differently depending on the time or environment. To deal with this problem, Liu et al. [32] proposed CMPSODMO, which is based on the multiswarm coevolution strategy. The authors also combined it with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed.

To make it easier for readers, Table 1 summarizes different application domains in which PSO algorithms have been applied for different purposes.

The abovementioned works can be effective and efficient for the optimization problems in Table 1; however, applying them to the problem of generating k tests in a single run with multiple objectives is not feasible, according to the work of Nguyen et al. [33]. Therefore, in this work, we propose an approach that uses Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) combined with Simulated Annealing (SA) for generating k tests with multiple objectives. Each swarm in this case is a test candidate, and it runs on a separate thread. The migration happens randomly by chance. We also aim to improve the accuracy and diversity of solutions.


Table 1: Summary of different application domains in which PSO algorithms have been applied for different purposes. (In the original table, each entry is also classified by type of optimization problem: unconstrained/constrained, continuous/discrete, single-/multiobjective, and single-/multiswarm.)

Category: Academia and scientometrics
- [22] PSO based on multiobjective functions with uniform design (MOPSO-UD): multiobjective PSO with uniform design generates the initial population instead of traditional random methods.
- [23] R2-based multi/many-objective particle swarm optimization: the proposed approach used R2 as an indicator to navigate swarms through the search space in multiobjective PSO.
- [21] An improved version, circular crowded sorting, combined with multiobjective PSO: the individuals of initial populations are scattered across the search space to be better at gathering the Pareto frontier.
- [20] An adaptive local search method for multiobjective PSO: uses the time variance search space index to improve the diversity of solutions and convergence.
- [24] Combining utopia point-guided search with multiobjective PSO: a strategy that selects the best individuals that are located near the utopia points.
- [19] A novel MOPSO with enhanced local search ability and parameter-less sharing: the proposed approach estimates the density of the particles' neighborhood in the search space. Initially, the proposed method accurately determines the crowding factor of the solutions; in later stages, it effectively guides the entire swarm to converge close to the true Pareto front.
- [28] Chaotic particle swarm optimization: the work improves the diversity of the population and uses simplified mesh reduction and gene exchange to improve the performance of the algorithm.
- [32] A coevolutionary technique based on multiswarm particle swarm optimization: the authors combined their proposed algorithm with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed.
- [31] Particle swarm optimization algorithm based on dynamic boundary search for constrained optimization: the authors proposed a strategy based on dynamic search boundaries to help escape the local optima.
- [30] A new PSO-based algorithm (FC-MOPSO): the FC-MOPSO algorithm can work on a mix of constrained, unconstrained, continuous, and/or discrete single-objective/multiobjective optimization problems.
- [15] A novel particle swarm optimization algorithm with multiple learning strategies (PSO-MLS): the authors proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods, such as the periodically stochastic learning strategy or random mutation learning strategy.
- [16] Cellular Learning Automata (CLA) for multiswarm PSO: each swarm is placed on a cell of the CLA, and each particle's velocity is affected by some other particles; the connected particles are adjusted over time via periods of learning.
- [17] Improved particle swarm optimization algorithm based on dynamical topology and purposeful detecting: in order to balance the search capabilities between swarms; the extensive experimental results illustrate the effectiveness and efficiency of the three proposed strategies used in MSPSO.
- [29] Particle swarm optimization with differential evolution (DE) strategy: the purpose is to achieve high-performance multiobjective optimization.
- [26] Coevolutionary multiswarm PSO: modifies the velocity update equation to increase search information and diversity of solutions to avoid a local Pareto front; the results show superior performance in solving optimization problems.

Category: Application (artificial intelligence)
- [18] Particle swarm optimization with local search: the authors proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.
- [27] Based on a new leader selection strategy to improve particle swarm optimization: the algorithm used triangular distance to select leader individuals that cover different regions in the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSO tridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images.
- [13, 14] Discrete PSO: for solving the problem of transmitting information on networks; the results prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

Category: Application (multichoice question test extraction)
- [2] Novel approach of particle swarm optimization (PSO): the dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks.
- [3] Particle swarm optimization (PSO): the authors used particle swarm optimization to generate tests with difficulties approximating the required levels from users. The experiment result shows that PSO gives the best performance concerning most of the criteria.
- [33] Multiswarm single-objective particle swarm optimization: the authors use particle swarm optimization to generate multitests with difficulties approximating the required levels from users. In the parallel stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversities of solutions.

3. Problem Statement

3.1. Problem of Generating Multiple Tests. In our previous works [3, 33], we have proposed a PSO-based method for multichoice test generation; however, it was a single-objective approach. In this paper, we introduce a multiobjective approach to multichoice test generation by combining the PSO and SA algorithms.

Let Q = {q1, q2, q3, ..., qn} be a question bank with n questions. Each question qi ∈ Q contains four attributes: QC, SC, QD, OD. QC is a question identifier code and is used to avoid duplication of any question in the solution. SC denotes a section code of the question and is used to indicate which section the question belongs to. QD denotes a time limit of the question, and OD denotes a real value in the range [0.1, 0.9] that represents an objective difficulty (level) of the question. QC, SC, and QD are discrete positive integer values, as in the work of Bui et al. [3] and Nguyen et al. [33].

The problem of generating multiple k tests (or just multiple tests) is to generate k tests simultaneously in a single run, i.e., our objective is to generate a set of tests in which each test Ei = {qi1, qi2, qi3, ..., qim} (qij ∈ Q, 1 ≤ j ≤ m, 1 ≤ i ≤ k, k ≤ n) consists of m (m ≤ n) questions. Additionally, those tests must satisfy both the requirement of objective difficulty ODR and the testing time duration TR that were given by users. For example, ODR = 0.8 and TR = 45 minutes mean that all the generated tests must have a level of difficulty approximately equal to 0.8 and a test time equal to 45 minutes.

The objective difficulty of a test Ei is defined as OD_Ei = (∑_{j=1}^{m} qij·OD)/m, and the duration of the test Ei is determined by T_Ei = ∑_{j=1}^{m} qij·QD.

Besides the aforementioned requirements, there are additional constraints that each generated test must satisfy, as follows:

C1: each question in a generated test must be unique (i.e., a question cannot appear more than once in a test).

C2: in order to make the test more diverse, there exists no case in which all questions in a test have the same difficulty value as the required objective difficulty ODR. For example, if ODR = 0.6, then ∃ qki ∈ Ek such that qki·OD ≠ 0.6.

C3: some questions in a question bank must stay in the same groups because their contents are related to each other. The generated tests must ensure that all the questions in one group appear together. This means that if a question of a specific group appears in a test, the remaining questions of the group must also be present in the same test [3, 33].

C4: as users may require generated tests to have several sections, a generated test must ensure that the required numbers of questions are drawn out from question banks for each section.
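Constraints such as C1 and C2 reduce to simple predicates over a candidate test. A minimal sketch, assuming the same dictionary representation of questions as above (the keys and sample values are illustrative assumptions, not the paper's data):

```python
def satisfies_c1(questions):
    """C1: no question code (QC) appears more than once in a test."""
    codes = [q["QC"] for q in questions]
    return len(codes) == len(set(codes))

def satisfies_c2(questions, od_r):
    """C2: not every question's difficulty equals the required difficulty OD_R."""
    return any(q["OD"] != od_r for q in questions)

# Hypothetical two-question test checked against OD_R = 0.6.
sample_test = [{"QC": 1, "OD": 0.6}, {"QC": 2, "OD": 0.8}]
ok = satisfies_c1(sample_test) and satisfies_c2(sample_test, 0.6)
```

C3 and C4 would be checked analogously against the group codes and section codes (SC) of the drawn questions.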

3.2. Modeling MMPSO for the Problem of Generating Multiple Tests. The model of MMPSO for the problem of generating multiple tests can be represented as follows:

min f1(x1, x2, x3, ..., xm)
min f2(x1, x2, x3, ..., xm)
...
min fk(x1, x2, x3, ..., xm)    (2)

where f1, f2, ..., fk are swarms that represent multiple tests, and x1, ..., xm are the m questions in the test.

Assume that F is an objective function for the multiobjective version of the problem; it can be formulated as follows:

F = ∑ αi · Fi,    (3)

where αi is a weight constraint (αi ∈ (0, 1]) with ∑ αi = 1, and Fi is a single-objective function. In this paper, we use an evaluation of two functions, which are the average level of difficulty requirement of the tests, F1, and the total test duration, F2.

F1 = f(qkj·OD) = (∑_{j=1}^{m} qkj·OD)/m − ODR, where F1 satisfies the conditions C1, C2, C3, C4; m is the total number of questions in the test, qkj·OD is the difficulty value of each question, and ODR is the required difficulty level.

F2 = f(qkj·QD) = 1 − (∑_{j=1}^{m} qkj·QD)/TR, where F2 satisfies the conditions C1, C2, C3, C4; m is the total number of questions in the test, qkj·QD is the duration of each question, and TR is the required total time of tests.

The objective function F is used as the fitness function in the MMPSO, and the result of the objective function is considered the fitness of the resulting test.

In this case, the smaller the value of F, the better the fitness. To improve the quality of the test, we also take into account the constraints C1, C2, C3, and C4.

For example, provided that we have a question bank as in Table 2 and the test extraction requirements are four questions, a difficulty level of 0.6 (ODR = 0.6), a total test duration of 300 seconds (TR = 300), and a weight constraint α = 0.4, Table 3 illustrates a candidate solution with a fitness of 0.1, computed by using formula (3).
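The fitness evaluation of formula (3) with the two objectives F1 and F2 can be sketched as below. The four sample questions are invented for illustration (they are not the entries of Table 2), so the resulting fitness differs from the 0.1 of the worked example:

```python
def fitness(questions, od_r, t_r, alpha=0.4):
    """F = alpha * F1 + (1 - alpha) * F2, per formula (3) with two objectives."""
    m = len(questions)
    f1 = sum(q["OD"] for q in questions) / m - od_r    # difficulty objective F1
    f2 = 1 - sum(q["QD"] for q in questions) / t_r     # duration objective F2
    return alpha * f1 + (1 - alpha) * f2

# Hypothetical 4-question candidate for OD_R = 0.6 and T_R = 300 s.
candidate = [{"OD": 0.5, "QD": 70}, {"OD": 0.7, "QD": 80},
             {"OD": 0.6, "QD": 75}, {"OD": 0.8, "QD": 75}]
f = fitness(candidate, od_r=0.6, t_r=300)  # F1 = 0.05, F2 = 0.0, so F = 0.02
```

A perfectly matching candidate (mean difficulty exactly ODR and total duration exactly TR) would score F = 0, the best possible fitness.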

4. MMPSO in Extracting Multiple Tests

4.1. Process of MMPSO for Extracting Tests. This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests, based on the idea in Bui et al. [3]. It can be described as follows. Creating an initial swarm population is the first step in PSO, in which each particle in a swarm is considered a candidate test; this first population also affects the speed of convergence to optimal solutions. This step randomly picks questions from a question bank. The questions, either stand-alone or staying in groups (constraint C3), are drawn out for one section (constraint C4) until the required number of questions of the section is reached, and the drawing process is repeated for the next sections. When the required number of questions of the candidate test and all the constraints are met, the fitness value of the generated test is computed according to formula (3).

6 Scientific Programming

The Gbest and Pbest position information is the contained questions. All Pbest slowly move towards Gbest by using the location information of Gbest. The movement is the replacement of some questions in the candidate test according to the velocity Pbest. If the fitness value of a newly found Pbest of a particle is smaller than the particle's currently best-known Pbest (i.e., the new position is better than the old one), then we assign the newly found position value to Pbest.

Gbest moves towards the final optimal solution in random directions. The movement is achieved by replacing its content with some random questions from the question bank. In a similar way to Pbest, if the new position is no better than the old one, the Gbest value will not be updated.

The algorithm ends when the fitness value is lower than the fitness threshold ε or when the number of movements (iteration loops) surpasses the loop threshold λ. Both thresholds are given by users.

4.2. Migration Parallel MMPSO for Extracting Tests (Parallel MMPSO). Based on the idea in Nguyen et al. [33], we present the migration parallel approach of MMPSO for increasing performance. Each swarm now corresponds to a thread, and migration happens by chance between swarms. The migration method starts with locking the current thread (swarm) to avoid interference from other threads.

In the dual-sector model [34], Lewis describes a relationship between two regions: the subsistence sector and the capitalist sector. We can view the two types of economic sectors here as the strong (capitalist) sectors and the weak (subsistence) sectors (while ignoring other related aspects of the economy). Whether a sector is strong or weak depends on the fitness value of the Gbest position of its swarm. However, when applying those theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The direction of migration changes when individuals with strong Pbest (strong individuals) in strong sectors move to weak sectors. The weak sectors' Gbest may be replaced by the incoming Pbest, and the fitness value of the weak swarms should make a large lift, as in the work of Nguyen et al. [33].

Backward migration from the weak swarms to strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This is to ensure that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there is a change in the fitness value of the current Gbest compared to the previous Gbest.

The probability for migration is denoted as γ, and its unit is a percentage (%).

The number of migrating particles is equal to δ times the size of the swarm (i.e., the number of existing particles in the swarm), where δ denotes the percentage of migration.

The migration parallel MMPSO-based approach to extract multiple tests is described in the form of pseudocode in Algorithm 1.

The particle updates its velocity (V) and position (P) with the following formulas:

V^(t+1) = V^t + r1 × V^t_pbest + r2 × V^t_gbest    (4)

where Vpbest is the velocity of Pbest, determined by Vpbest = α × m; Vgbest is the velocity of Gbest, determined by Vgbest = β × m; α, β ∈ (0, 1); r1, r2 are random values; and m is the number of questions in the test solutions.

P^(t+1) = P^t + V^(t+1)    (5)

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying the tests. This stage happens when there is a change in the value of Gbest of a swarm (for each thread) in the first stage. In this second stage, migration happens between swarms to exchange information between running threads, improving the convergence and diversity of solutions based on the work of Nguyen et al. [33]. The complete flowchart that applies the parallelized migration method to the MMPSO algorithm is shown in Figure 1.

4.3. Migration Parallel MMPSO in Combination with Simulated Annealing. As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of a set of initial solutions (population) is generally performed randomly in PSO. This is one of its drawbacks: since the search space is very wide, the probability of getting stuck in a local optimum solution is

Table 3: An example of test results
An individual that satisfies the requirement (ODR = 0.6, TR = 300, α = 0.4):
QC: 05   08   01   04
OD: 0.4  0.8  0.3  0.7
QD: 65   35   100  40
Fitness = (0.4 × |(2.2/4) − 0.6|) + [(1 − 0.4) × |1 − (240/300)|] = 0.14

Table 2: The question bank
QC: 01   02   03   04   05   06   07   08   09   10
OD: 0.3  0.2  0.8  0.7  0.4  0.6  0.5  0.8  0.2  0.3
QD: 100  110  35   40   65   60   60   35   110  100


also high. In order to improve the initial population, we apply SA in the initial population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optimums, as in Kharrat and Neji [35]. In this study, the process of finding new solutions using SA is improved by moving Pbest to Gbest using the received information about the location of Gbest (which is commonly used in PSO). The MMPSO with SA is described by pseudocode in Algorithm 2.

5. Experimental Studies

5.1. Experimental Environment and Data. Bui et al. [3] evaluated different algorithms, such as the random method, genetic algorithms, and a PSO-based algorithm, for extracting tests from question banks of varying sizes. The results of that experiment showed that the PSO-based algorithms are better than the others. Hence, the experiment in this paper only evaluates and compares the improved SA parallel MMPSO algorithm with the normal parallel MMPSO algorithm, in terms of the diversity of tests and the accuracy of solutions.

All proposed algorithms are implemented in C and run on 2 computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include 2 question banks. One is with 998 different questions (the small question bank), and the other one is with 12000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12000 different questions, in which each part has 1000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) and an evaluation of the two functions: the average level of difficulty requirement of the tests, F1, and the total test duration, F2.

5.2. Evaluation Method. In this part, we present the formula for the evaluation of all algorithms regarding their stability in producing required tests under various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

A = sqrt( Σ_{i=1..z} (yi − ȳ)² / (z − 1) )    (6)

Function Migration_Par_MMPSO: Extracting tests using Migration Parallel MMPSO
  For each available thread t do
    Generate the initial population with random questions
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest do
        Move Pbest towards Gbest using location information of Gbest
        Update velocity using equation (4)
        Update position using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Function Migration_MMPSO: Improving solutions with migration method
  Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference during the migration procedure
  Select λ, the set of stronger individuals for migration, except for the Gbest
  Lock other threads so that no unintended changes happen to them during the migration
  Choose a thread that has a Gbest weaker than the one in the current thread
  Unlock the other threads except for the chosen thread
  Set the status of the chosen thread to "Exchanging"
  Move the λ selected individuals to the chosen thread
  Remove those λ selected individuals
  Select the λ weakest individuals in the chosen thread
  Add those λ weakest individuals to the current thread
  Set the status of the chosen thread to "Available"
  Unlock the current thread and the chosen thread

Algorithm 1: Pseudocode of migration parallel MMPSO


Figure 1: The flowchart of the MMPSO algorithm in migration parallel. (The flowchart shows, for each of the k threads: input of extraction requirements; initial population generation using SA; fitness evaluation F = Σ αiFi; Gbest and Pbest selection, with Pbest approaching Gbest and Gbest approaching the goal; migration when the migration condition is met; a stopping-condition check; and finally getting the solution.)

Function Migration_Par_MMPSO_SA: Extracting tests using Migration Parallel MMPSO and SA
  For each available thread t do
    Generate the initial population by using SA, as it is capable of escaping local minimums. In this stage, the process of finding new solutions using SA is improved by moving Pbest to Gbest using the received information about the location of Gbest, and any incorrect solutions are removed
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest do
        Move Pbest towards Gbest using location information of Gbest
        Apply velocity update using equation (4)
        Apply position update using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Algorithm 2: Pseudocode of migration parallel MMPSO with SA


Figure 2: The allocation of the difficulty level and time of question in the small question bank (difficulty level of question on a 0 to 1 scale and time of question from 0 to 140 seconds, plotted against question ids 1 to 1000).

Figure 3: The allocation of the difficulty level and time of question in the large question bank (difficulty level of question on a 0 to 1 scale and time of question from 0 to 140 seconds, plotted against question ids 1 to 12000).

Table 4: Experimental parameters
The required level of difficulty (ODR): 0.5
The total test time (TR): 5400 (seconds)
The value of α: [0.1, 0.9]
The number of required questions in a test: 100
The number of questions in each section in the test: 10
The number of simultaneously generated tests in each run: 100
The number of questions in the bank: 1000 and 12000

The PSO's parameters:
The number of particles in each swarm: 10
Random values r1, r2: in [0, 1]
The percentage of Pbest individuals which receive position information from Gbest (C1): 5%
The percentage of Gbest which moves to final goals (C2): 5%
The percentage of migration δ: 10%
The percentage of migration probability γ: 5%
The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000

The SA's parameters:
Initial temperature: 100
Cooling rate: 0.9
Termination temperature: 0.01
Number of iterations: 100

Table 5: Experimental results in the small question bank
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (s) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11   | 61.3658  | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445  | 47.9793  | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425  | 35.8007  | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530  | 30.5070  | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774  | 29.5425  | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973  | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059  | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581  | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975  | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4    | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621  | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441  | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127  | 76.75  | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747  | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690  | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575  | 51.3890  | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578  | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135  | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524  | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976  | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573  | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467  | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420  | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735  | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816  | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966  | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536  | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098  | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776  | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981  | 84.57  | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094  | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685  | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (s) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 2931 | 2.350  | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 1.405  | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 0.956  | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 0.599  | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 0.361  | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 0.277  | 79.32  | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 0.322  | 92.33  | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 0.492  | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 1.098  | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 9.975  | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 8.490  | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 7.443  | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 7.303  | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 6.988  | 99.92  | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 6.734  | 61.42  | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 5.343  | 69.02  | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 7.006  | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 7.958  | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 3.352  | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 1.933  | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 1.243  | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 0.857  | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 0.602  | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 0.444  | 78.89  | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 0.498  | 92.53  | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 0.794  | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 1.576  | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 10.200 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 8.568  | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 7.735  | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 7.319  | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 6.962  | 99.66  | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 6.783  | 61.33  | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 6.419  | 69.02  | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 6.146  | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 7.162  | 319.68 | 0.000731 | 1.03 | 0.000213

Figure 4: Experimental results of the small question bank (runtime in seconds and fitness value from Table 5, plotted against the weight constraint α from 0.1 to 0.9, for parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, each with and without SA shown as runtime and fitness-value series).


where z is the number of experimental runs, ȳ is the average fitness of all runs, and yi is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms. If its value is low, then the generated tests of each run do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or the balance between the two. We can select α to suit what we require, emphasizing either the test duration or the test difficulty.

5.3. Experimental Results. The experiments are executed with the parameters in Table 4, chosen following Ridge and Kudenko [36], and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of all fitness values of the 5000 generated tests. The average duplicate indicates the average percentage of duplicate questions among the 100 generated tests over all 50 runs; it is also used to indicate the diversity of tests. The lower the value, the more diverse the tests.

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the test difficulty requirement is harder than satisfying the total test time requirement. The experiment results also show that integrating SA gives a better fitness value than not using SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100, we need to generate 50 tests in a single run, and each test contains 30 questions, then some generated tests will inevitably contain questions that also appear in other generated tests.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to get generated tests that better fit the test requirements. The reason is that the generated tests we obtain at the first run are likely close to the requirements (due to the low fitness value), and the chance that we obtain generated tests that fit the requirements less well is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multichoice tests. The quality of the multichoice questions is good (diversity of the level of difficulty of the questions and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

Figure 5: Experimental results of the large question bank (runtime in seconds and fitness value from Table 6, plotted against the weight constraint α from 0.1 to 0.9, for parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, each with and without SA shown as runtime and fitness-value series).


MMPSO based on various criteria, such as diversity of solutions and number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with emphasis on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.
[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.
[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.
[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.
[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.
[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multimodal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.
[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.
[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.
[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.
[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.
[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.
[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.
[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.
[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.
[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.
[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.
[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.
[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.
[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.
[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.
[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.
[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.
[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.
[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.
[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.
[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.
[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, 2017.
[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.
[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.
[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.
[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.
[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.
[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.
[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.
[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.
[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.
[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.



seen that generating all tests with the same level of difficulty is an extremely hard task, even when manually choosing questions from a question bank, and the success rate of generating multichoice tests satisfying a given difficulty is low and the process time-consuming. Therefore, to speed up the process, some authors chose to automatically generate tests with computers, approximating the difficulties of the generated tests to the required difficulties. This is also known as generating tests with a single objective, where the level of difficulty is the objective. For example, Bui et al. [3] proposed the use of particle swarm optimization to generate tests with difficulties approximating the levels required by users. The tests are generated from question banks that consist of various questions with different difficulties. The difficulty value of each question is judged and adapted based on users via previous real-life exams. The work evaluates three random-oriented approaches, which are Genetic Algorithms (GAs), by Yildirim [4, 5], and Particle Swarm Optimization (PSO). The experiment result shows that PSO gives the best performance concerning most of the criteria by Bui et al. [3]. Previous works only focused on solving a single objective of test extraction, based on the difficulty level requirement defined by the user. In practice, exam tests can depend on multiple factors, such as the questions' duration and total testing time. Thus, designing a method that can generate tests with multiple objectives is challenging. Furthermore, the proposed approaches can only extract a single test at each run. To extract multiple tests, the authors have to execute their application multiple times. This method is time-consuming, and duplicate tests can occur because each run is executed separately. Besides, the runs do not have any information about each other to avoid duplication.

In this paper, we propose a new approach that uses Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) to extract k tests with multiple objectives in a single run. Multiswarms map naturally to the multitest setting of extracting k tests; the search is based on multiple subswarms instead of one standard swarm whose application must be executed multiple times to extract multiple tests. The use of diverse subswarms to increase performance when optimizing tests is studied in Antonio and Chen [6]. Additionally, we use Simulated Annealing (SA) to initialize the first population for PSO to increase the diversity of the generated tests. We also aim to improve the results on various criteria, such as the diversity of solutions and accuracy.

The main contributions of this paper are as follows:

(1) We propose a multiswarm multiobjective approach to deal with the problem of extracting k tests simultaneously.

(2) We propose the use of SA in combination with PSO for extracting tests. SA was selected as it is capable of escaping local optimum solutions.

(3) We propose a parallel version of our serial algorithms. Using parallelism, we can control the overlap of extracted tests and save time.

The rest of this paper is organized as follows. Section 2 describes the related research. The problem of extracting k tests from question banks is explained in Section 3. Approaches based on normal multiswarm multiobjective PSO and on multiswarm multiobjective PSO with SA for the problem of extracting k tests from question banks are presented in Sections 4 and 5. The next section analyzes and discusses the experimental results of this study. Finally, future research trends and the conclusions of the paper are provided in Section 6.

2. Related Work

Recently, evolutionary algorithms have been applied in many fields for optimization problems. Some of the most well-known algorithms are Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). GAs were invented based on the concept of Darwin's theory of evolution, and they seek solutions via progressions of generations. Heuristic information is used for navigating the search space for potential individuals, and this can achieve globally optimal solutions. Since then, there have been many works that used GAs in practice [7-11].

Particle swarm optimization is a swarm-based technique for optimization that was developed by Eberhart and Kennedy [12]. It imitates the behavior of a school of fish or a flock of birds. PSO optimizes solutions via the movements of individuals. The foundation of PSO's method of finding optima is based on the following principles proposed by Eberhart and Kennedy: (1) all individuals (particles) in swarms tend to find and move towards possible attractors; (2) each individual remembers the position of the best attractor that it found. In particular, each solution is a particle in a swarm and is denoted by two variables. One is the current location, denoted by present[], and the other is the particle's velocity, denoted by v[]. They are two vectors in the vector space R^n, in which n changes based on the problem. Additionally, each particle has a fitness value that is given by a chosen fitness function. At the beginning of the algorithm, the initial generation (population) is created either in a random manner or by some other method. The movement of each particle is affected by two information sources. The first is Pbest, which is the best-known position that the particle visited in its past movements. The second is Gbest, which is the best-known position of the whole swarm. In the original work proposed by Eberhart and Kennedy, particles traverse the search space by going after the particles with strong fitness values. In particular, at each discrete time step, the velocity and position of each individual are updated with the following formulas:

v^(t+1) = v^t + c1 × rand() × (Pbest^t − present^t) + c2 × rand() × (Gbest^t − present^t),
present^(t+1) = present^t + v^(t+1),    (1)

where rand() is a function that returns a random number in the range (0, 1) and c1, c2 are constant weights.
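The update rule in equation (1) can be illustrated with a minimal continuous PSO on a toy sphere function. This is a sketch, not the paper's test-extraction variant; the parameter values and the velocity clamp (which the paper's equation omits) are illustrative assumptions.

```python
import random

def pso_sphere(dim=3, particles=20, iters=200, c1=1.5, c2=1.5, vmax=4.0, seed=1):
    """Minimal PSO minimizing f(x) = sum(x_i^2), following equation (1).
    A velocity clamp (vmax) is added for numerical stability."""
    rnd = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]              # best-known position of each particle
    gbest = min(pbest, key=f)[:]             # best-known position of the whole swarm
    start = f(gbest)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # v <- v + c1*rand()*(Pbest - present) + c2*rand()*(Gbest - present)
                v = (vel[i][d]
                     + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                     + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))
                pos[i][d] += vel[i][d]       # present <- present + v
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return start, f(gbest)

start, end = pso_sphere()
```

Because Gbest is replaced only when a strictly better position is found, the final fitness can never be worse than the initial one.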

While PSO is mostly used for the continuous value domain, some recent works have shown that PSO can also be prominently useful for discrete optimization. For example, Sen and Krishnamoorthy [13, 14] transformed the original PSO into a discrete PSO for solving the problem of transmitting information on networks. Their results prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

2 Scientific Programming

To further improve performance in real-life applications, some variants of PSO have been proposed and exploited, such as multiswarm PSO. Peng et al. [15] proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods, such as the periodically stochastic learning strategy or the random mutation learning strategy. The experiments were run on a set of specific benchmarks. The results showed that the proposed method gives a better quality of solutions and has a higher chance of giving correct solutions than normal PSO. Vafashoar and Meibodi [16] proposed an approach that uses Cellular Learning Automata (CLA) for multiswarm PSO. Each swarm is placed on a cell of the CLA, and each particle's velocity is affected by some other particles. The connected particles are adjusted over time via periods of learning. The results indicate that the proposed method is quite effective for optimization problems on various datasets. In order to balance the search capabilities between swarms, Xia et al. [17] used multiswarm PSO in combination with various strategies, such as the dynamic subswarm number, subswarm regrouping, and purposeful detecting. Nakisa et al. [18] proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combined the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.

In practice, there exist many optimization problems with multiple objectives instead of a single objective; thus, a great deal of work on multiobjective optimization has been proposed. For example, Li and Babak [19] proposed a multiobjective PSO combined with an enhanced local search ability and parameter-less sharing. Kapse and Krishnapillai [20] also proposed an adaptive local search method for multiobjective PSO using the time variance search space index to improve the diversity of solutions and convergence. Based on crowded distance sorting, Cheng et al. [21] proposed an improved version, circular crowded sorting, combined with multiobjective PSO. The approach scatters the individuals of initial populations across the search space in order to be better at gathering the Pareto frontier. The method was proven to improve the search capabilities, the speed of convergence, and the diversity of solutions. Similarly, Adel et al. [22] used multiobjective PSO with uniform design instead of traditional random methods to generate the initial population. Based on the R2 measurement, Alan et al. [23] proposed an approach that used R2 as an indicator to navigate swarms through the search space in multiobjective PSO. By combining utopia point-guided search with multiobjective PSO, Kapse and Krishnapillai [24] proposed a strategy that selects the best individuals that are located near the utopia points. The authors also compared their method with other algorithms, such as NSGA-II (Nondominated Sorting Genetic Algorithm II) by Deb et al. [25] and CMPSO (Coevolutionary Multiswarm PSO) by Zhan et al. [26], on several benchmarks and demonstrated the proposed method's effectiveness. Saxena and Mishra [27] designed a multiobjective PSO algorithm named MOPSO-tridist. The algorithm used triangular distance to select leader individuals that cover different regions of the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSO-tridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images. Liansong and Dazhi [28] designed a multiobjective optimization based on chaotic particle swarm optimization, and Xiang and Xueqing [29] proposed an extension based on comprehensive learning particle swarm optimization, the MSCLPSO algorithm, which incorporates various techniques from other evolutionary algorithms. In order to increase the flexibility of multiobjective PSO, Mokarram and Banan [30] proposed the FC-MOPSO algorithm, which can work on a mix of constrained, unconstrained, continuous, and/or discrete optimization problems. Recently, Mohamad et al. [31] reviewed and summarized the disadvantages of multiobjective PSO. Based on that, they proposed an algorithm, M-MOPSO. The authors also proposed a strategy based on dynamic search boundaries to help escape local optima. M-MOPSO was proven to be more efficient when compared with several state-of-the-art algorithms, such as the Multiobjective Grey Wolf Optimizer (MOGWO), the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), and Multiobjective Differential Evolution (MODE).

An extension of multiple-objective optimization problems is dynamic multiple-objective optimization, in which each objective changes differently depending on the time or environment. To deal with this problem, Liu et al. [32] proposed CMPSODMO, which is based on the multiswarm coevolution strategy. The authors also combined it with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed.

To make it easier for readers, Table 1 summarizes the different application domains in which PSO algorithms have been applied for different purposes.

The abovementioned works can be effective and efficient for the optimization problems in Table 1; however, applying them to the problem of generating k tests in a single run with multiple objectives is not feasible, according to the work of Nguyen et al. [33]. Therefore, in this work, we propose an approach that uses Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) combined with Simulated Annealing (SA) for generating k tests with multiple objectives. Each swarm in this case is a test candidate, and it runs on a separate thread. Migration happens randomly by chance. We also aim to improve the accuracy and diversity of solutions.


Table 1: Summary of the different application domains in which PSO algorithms have been applied for different purposes. (For each entry, the original table also marks the applicable problem types: unconstrained or constrained, continuous or discrete, single-objective or multiobjective, and single-swarm or multiswarm.)

Academia and scientometrics:
- [22] Particle swarm optimization based on multiobjective functions with uniform design (MOPSO-UD): multiobjective PSO with uniform design generates the initial population instead of traditional random methods.
- [23] R2-based multi/many-objective particle swarm optimization: the proposed approach used R2 as an indicator to navigate swarms through the search space in multiobjective PSO.
- [21] An improved version, circular crowded sorting, combined with multiobjective PSO: the individuals of the initial populations are scattered across the search space to be better at gathering the Pareto frontier.
- [20] An adaptive local search method for multiobjective PSO: uses the time variance search space index to improve the diversity of solutions and convergence.
- [24] Combining utopia point-guided search with multiobjective PSO: a strategy that selects the best individuals that are located near the utopia points.
- [19] A novel MOPSO with enhanced local search ability and parameter-less sharing: the proposed approach estimates the density of the particles' neighborhood in the search space. Initially, the proposed method accurately determines the crowding factor of the solutions; in later stages, it effectively guides the entire swarm to converge close to the true Pareto front.
- [28] Chaotic particle swarm optimization: the work improves the diversity of the population and uses simplified mesh reduction and gene exchange to improve the performance of the algorithm.
- [32] A coevolutionary technique based on multiswarm particle swarm optimization: the authors combined their proposed algorithm with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed.
- [31] Particle swarm optimization based on dynamic boundary search for constrained optimization: the authors proposed a strategy based on dynamic search boundaries to help escape the local optima.
- [30] A new PSO-based algorithm (FC-MOPSO): the FC-MOPSO algorithm can work on a mix of constrained, unconstrained, continuous, and/or discrete single-objective/multiobjective optimization problems.
- [15] A novel particle swarm optimization algorithm with multiple learning strategies (PSO-MLS): the authors proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods, such as the periodically stochastic learning strategy or the random mutation learning strategy.
- [16] Cellular Learning Automata (CLA) for multiswarm PSO: each swarm is placed on a cell of the CLA, and each particle's velocity is affected by some other particles. The connected particles are adjusted over time via periods of learning.
- [17] Improved particle swarm optimization based on dynamical topology and purposeful detecting: aims to balance the search capabilities between swarms. The extensive experimental results illustrate the effectiveness and efficiency of the three proposed strategies used in MSPSO.
- [29] Particle swarm optimization with differential evolution (DE) strategy: the purpose is to achieve high-performance multiobjective optimization.
- [26] Coevolutionary multiswarm PSO: modifies the velocity update equation to increase search information and diversity of solutions to avoid local Pareto fronts. The results show superior performance in solving optimization problems.

Application (artificial intelligence):
- [18] Particle swarm optimization with local search: the authors proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.
- [27] Improved particle swarm optimization based on a new leader selection strategy: the algorithm used triangular distance to select leader individuals that cover different regions of the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSO-tridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images.
- [13, 14] Discrete PSO: for solving the problem of transmitting information on networks. The results prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

Application (multichoice question test extraction):
- [2] Novel approach of particle swarm optimization (PSO): a dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks.
- [3] Particle swarm optimization (PSO): the authors used particle swarm optimization to generate tests with difficulties approximating the levels required by users. The experiment results show that PSO gives the best performance concerning most of the criteria.
- [33] Multiswarm single-objective particle swarm optimization: the authors use particle swarm optimization to generate multiple tests with difficulties approximating the levels required by users. In the parallel stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions.


3. Problem Statement

3.1. Problem of Generating Multiple Tests. In our previous works [3, 33], we proposed a PSO-based method for multichoice test generation; however, it was a single-objective approach. In this paper, we introduce a multiobjective approach to multichoice test generation by combining the PSO and SA algorithms.

Let Q = {q1, q2, q3, …, qn} be a question bank with n questions. Each question qi ∈ Q contains four attributes: QC, SC, QD, and OD. QC is a question identifier code and is used to avoid duplication of any question in the solution. SC denotes the section code of the question and is used to indicate which section the question belongs to. QD denotes the time limit of the question, and OD denotes a real value in the range [0.1, 0.9] that represents the objective difficulty (level) of the question. QC, SC, and QD are discrete positive integer values, as in the work of Bui et al. [3] and Nguyen et al. [33].

The problem of generating multiple k tests (or just multiple tests) is to generate k tests simultaneously in a single run; that is, our objective is to generate a set of tests in which each test Ei = {qi1, qi2, qi3, …, qim} (qij ∈ Q, 1 ≤ j ≤ m, 1 ≤ i ≤ k, k ≤ n) consists of m (m ≤ n) questions. Additionally, those tests must satisfy both the objective difficulty requirement ODR and the testing time duration TR given by users. For example, ODR = 0.8 and TR = 45 minutes mean that all the generated tests must have a difficulty level of approximately 0.8 and a test time of 45 minutes.

The objective difficulty of a test Ei is defined as OD_Ei = (Σ_{j=1}^{m} q_ij.OD)/m, and the duration of the test Ei is determined by T_Ei = Σ_{j=1}^{m} q_ij.QD.

Besides the aforementioned requirements, there are additional constraints that each generated test must satisfy, as follows:

C1: each question in a generated test must be unique (i.e., a question cannot appear more than once in a test).

C2: in order to make the test more diverse, there must be no case in which all questions in a test have the same difficulty value as the required objective difficulty ODR. For example, if ODR = 0.6, then ∃ qki ∈ TEk such that qki.OD ≠ 0.6.

C3: some questions in a question bank must stay in the same groups because their content is related to each other. The generated tests must ensure that all the questions in one group appear together. This means that if a question of a specific group appears in a test, the remaining questions of the group must also be present in the same test [3, 33].

C4: as users may require generated tests to have several sections, a generated test must ensure that the required numbers of questions are drawn out from the question bank for each section.

3.2. Modeling MMPSO for the Problem of Generating Multiple Tests. The model of MMPSO for the problem of generating multiple tests can be represented as follows:

min f1(x1, x2, x3, …, xm)
min f2(x1, x2, x3, …, xm)
…
min fk(x1, x2, x3, …, xm)    (2)

where f1, f2, …, fk are swarms that represent the multiple tests and x1, …, xm are the questions in a test.

Assume that F is an objective function for the multiobjective version of the problem; it can be formulated as follows:

F = Σ αi × Fi,    (3)

where αi is a weight constraint (αi ∈ (0, 1]) with Σ αi = 1, and Fi is a single-objective function. In this paper, we evaluate two such functions: the average difficulty requirement of the tests, F1, and the total test duration, F2.

F1 = f(qkj.OD) = |(Σ_{j=1}^{m} qkj.OD)/m − ODR|, where F1 satisfies the conditions C1, C2, C3, and C4, m is the total number of questions in the test, qkj.OD is the difficulty value of each question, and ODR is the required difficulty level.

F2 = f(qkj.QD) = |1 − (Σ_{j=1}^{m} qkj.QD)/TR|, where F2 satisfies the conditions C1, C2, C3, and C4, m is the total number of questions in the test, qkj.QD is the duration of each question, and TR is the required total time of the tests.

The objective function F is used as the fitness function in MMPSO, and the value of the objective function is considered the fitness of the resulting test. The smaller F becomes, the better the fitness. To improve the quality of the test, we also take into account the constraints C1, C2, C3, and C4.

For example, provided that we have a question bank as in Table 2 and the test extraction requirements are four questions, a difficulty level of 0.6 (ODR = 0.6), a total test duration of 300 seconds (TR = 300), and a weight constraint α = 0.4, Table 3 illustrates a candidate solution with its fitness of 0.14 computed by using formula (3).
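The arithmetic of formula (3) on this example can be checked numerically. The sketch below hardcodes the four drawn questions and is only a verification of the weighted sum; the function name and argument layout are illustrative assumptions.

```python
def fitness(ods, qds, od_r, t_r, alpha):
    """F = alpha*F1 + (1 - alpha)*F2 as in formula (3), with
    F1 = |mean(OD) - ODR| and F2 = |1 - sum(QD)/TR|."""
    f1 = abs(sum(ods) / len(ods) - od_r)
    f2 = abs(1 - sum(qds) / t_r)
    return alpha * f1 + (1 - alpha) * f2

# The four questions QC = {05, 08, 01, 04} drawn from the bank in Table 2
f = fitness([0.4, 0.8, 0.3, 0.7], [65, 35, 100, 40], od_r=0.6, t_r=300, alpha=0.4)
# 0.4 * |0.55 - 0.6| + 0.6 * |1 - 0.8| = 0.02 + 0.12 = 0.14
```

A fitness below the tolerance threshold would mark the candidate as an acceptable test.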

4. MMPSO in Extracting Multiple Tests

4.1. Process of MMPSO for Extracting Tests. This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests based on the idea in Bui et al. [3]. It can be described as follows. Creating an initial swarm population is the first step in PSO, in which each particle in a swarm is considered a candidate test; this first population also affects the speed of convergence to optimal solutions. This step randomly picks questions from a question bank. The questions, either stand-alone or staying in groups (constraint C3), are drawn out for one section (constraint C4) until the required number of questions of the section is reached, and the drawing process is repeated for the next sections. When the required number of questions of the candidate test is reached and all the constraints are met, the fitness value of the generated test is computed according to formula (3).
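The per-section drawing step above can be sketched as follows. The data layout (a dict from section code to "units", where a unit is either a single question or a whole group) is an assumption for illustration; a real implementation would retry or repair when a draw cannot reach the required count.

```python
import random

def random_candidate(bank, section_counts, rnd=random):
    """Draw one random candidate test: for each section, keep drawing
    stand-alone questions or whole groups (constraint C3) until the
    section's required question count (constraint C4) is met.

    bank           : dict mapping section code -> list of units,
                     each unit a list of question dicts (length 1 if stand-alone)
    section_counts : dict mapping section code -> required number of questions
    """
    test = []
    for sc, need in section_counts.items():
        units = bank[sc][:]          # copy so the bank itself is not shuffled
        rnd.shuffle(units)
        taken = 0
        for unit in units:
            if taken >= need:
                break
            if taken + len(unit) <= need:   # a group is taken whole or not at all
                test.extend(unit)
                taken += len(unit)
        if taken != need:
            return None              # a real implementation would retry here
    return test
```

Returning None signals an unlucky draw (e.g., only a group larger than the remaining quota is left), which the caller can simply redo.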

The Gbest and Pbest position information is the contained questions. All Pbest slowly move towards Gbest by using the location information of Gbest. The movement is the replacement of some questions in the candidate test according to the velocity of Pbest. If the fitness value of a newly found Pbest of a particle is smaller than the particle's currently best-known Pbest (i.e., the new position is better than the old), then we assign the newly found position value to Pbest.

Gbest moves towards the final optimal solution in random directions. The movement is achieved by replacing some of its content with random questions from the question bank. In a similar way to Pbest, if the new position is no better than the old one, the Gbest value is not updated.

The algorithm ends when the fitness value is lower than the fitness threshold ε or the number of movements (iteration loops) surpasses the loop threshold λ. Both thresholds are given by users.

4.2. Migration Parallel MMPSO for Extracting Tests (Parallel MMPSO). Based on the idea in Nguyen et al. [33], we present the migration parallel approach of MMPSO for increasing performance. Each swarm now corresponds to a thread, and migration happens by chance between swarms. The migration method starts with locking the current thread (swarm) to avoid interference from other threads during the migration procedure.

In the dual-sector model [34], Lewis describes a relationship between two regions: the subsistence sector and the capitalist sector. We can view the two types of economic sectors here as the strong (capitalist) sectors and the weak (subsistence) sectors (while ignoring other related aspects of the economy). Whether a sector is strong or weak depends on the fitness value of the Gbest position of its swarm. However, when applying those theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The forward direction of migration is when individuals with strong Pbest (strong individuals) in strong sectors move to weak sectors. The weak sectors' Gbest may be replaced by the incoming Pbest, so the fitness value of the weak swarms should make a large lift, as in the work of Nguyen et al. [33].

Backward migration from the weak swarms to the strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This ensures that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there are changes in the fitness value of the current Gbest compared to the previous Gbest.

The probability of migration is denoted as c, and its unit is a percentage (%).

The number of migrating particles is equal to δ times the size of the swarm (i.e., the number of existing particles in the swarm), where δ denotes the percentage of migration.

The migration parallel MMPSO-based approach to extract multiple tests is described in the form of a pseudocode in Algorithm 1.

The particle updates its velocity (V) and position (P) with the following formulas:

V^(t+1) = V^t + r1 × V^t_Pbest + r2 × V^t_Gbest,    (4)

where V_Pbest is the velocity of Pbest, determined by V_Pbest = α × m; V_Gbest is the velocity of Gbest, determined by V_Gbest = β × m; α, β ∈ (0, 1); r1, r2 are random values; and m is the number of questions in the test solutions.

P^(t+1) = P^t + V^(t+1).    (5)
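In this discrete setting, the "velocity" effectively determines how many questions of a candidate test are swapped toward a guide position (Pbest or Gbest) per move. The sketch below is one plausible reading of equations (4) and (5), under the assumption (not spelled out in the paper) that a move copies that many questions from the guide into random slots of the current test; question ids are plain hashable values here.

```python
import random

def move_towards(current, guide, rate, rnd=random):
    """Discrete position-update sketch: replace about rate*m questions of
    `current` with questions taken from `guide`, keeping all m entries unique."""
    m = len(current)
    k = max(1, int(rate * m))                    # velocity: number of swaps
    result = list(current)
    donors = [q for q in guide if q not in result]   # only questions we lack
    rnd.shuffle(donors)
    slots = rnd.sample(range(m), min(k, len(donors)))
    for slot, q in zip(slots, donors):
        result[slot] = q                         # pull the test toward the guide
    return result
```

Each call strictly increases the overlap with the guide (when donors exist) while preserving constraint C1 (uniqueness).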

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying the tests. This stage happens when there is a change in the value of Gbest of a swarm (for each thread) in the first stage. In this second stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions, based on the work of Nguyen et al. [33]. The complete flowchart that applies the parallelized migration method to the MMPSO algorithm is shown in Figure 1.

4.3. Migration Parallel MMPSO in Combination with Simulated Annealing. As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of a set of initial solutions (population) is generally performed randomly in PSO. This is a drawback: since the search space is very wide, the probability of getting stuck in a local optimum solution is

Table 3: An example of test results.

An individual that satisfies the requirements (ODR = 0.6, TR = 300, α = 0.4):
QC: 05, 08, 01, 04
OD: 0.4, 0.8, 0.3, 0.7
QD: 65, 35, 100, 40
Fitness: (0.4 × |(2.2/4) − 0.6|) + [(1 − 0.4) × |1 − (240/300)|] = 0.14

Table 2: The question bank.

QC: 01  02  03  04  05  06  07  08  09  10
OD: 0.3 0.2 0.8 0.7 0.4 0.6 0.5 0.8 0.2 0.3
QD: 100 110 35  40  65  60  60  35  110 100


also high. In order to improve the initial population, we apply SA in the initial population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optimums, as in Kharrat and Neji [35]. In this study, the process of finding new solutions using SA is improved by moving Pbest to Gbest using the received information about the location of Gbest (which is commonly used in PSO). The MMPSO with SA is described by a pseudocode in Algorithm 2.
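The escape mechanism that motivates using SA here is its acceptance rule: a worse neighbour is still accepted with probability exp(−Δ/T), which shrinks as the temperature T cools. The following generic sketch illustrates that rule on a toy problem; it is not the paper's exact initialization routine, and all parameters are illustrative.

```python
import math
import random

def simulated_annealing(init, neighbor, cost, t0=1.0, cooling=0.95, steps=500, seed=7):
    """Generic SA: always accept improvements; accept worse neighbours
    with probability exp(-delta/T), then cool the temperature."""
    rnd = random.Random(seed)
    current = best = init
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rnd)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rnd.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling                  # geometric cooling schedule
    return best

# Toy usage: minimize |x - 3| over the integers, starting far away
best = simulated_annealing(
    init=50,
    neighbor=lambda x, r: x + r.choice([-1, 1]),
    cost=lambda x: abs(x - 3),
)
```

In the test-extraction setting, `neighbor` would swap questions in a candidate test and `cost` would be the fitness F from formula (3).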

5. Experimental Studies

5.1. Experimental Environment and Data. Bui et al. [3] evaluated different algorithms, such as the random method, genetic algorithms, and a PSO-based algorithm, for extracting tests from question banks of varying sizes. The results of that experiment showed that the PSO-based algorithms are better than the others. Hence, the experiment in this paper only evaluates and compares the improved SA parallel MMPSO algorithm with the normal parallel MMPSO algorithm in terms of the diversity of tests and the accuracy of solutions.

All proposed algorithms are implemented in C and run on two computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include two question banks: one with 998 different questions (the small question bank) and the other with 12,000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12,000 different questions, in which each part has 1,000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) and evaluating the two functions, which are the average difficulty requirement of the tests, F1, and the total test duration, F2.

5.2. Evaluation Method. In this part, we present the formula used to evaluate the stability of all algorithms in producing the required tests with various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

A = sqrt( Σ_{i=1}^{z} (yi − ȳ)² / (z − 1) ),    (6)
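Formula (6) is the sample standard deviation with a z − 1 denominator; a minimal sketch:

```python
import math

def sample_std(values):
    """Standard deviation with the (z - 1) denominator, as in formula (6)."""
    z = len(values)
    mean = sum(values) / z           # y-bar
    return math.sqrt(sum((y - mean) ** 2 for y in values) / (z - 1))

# For [1, 2, 3, 4, 5]: mean 3, squared deviations sum to 10, so A = sqrt(10/4)
a = sample_std([1, 2, 3, 4, 5])     # sqrt(2.5) ~ 1.5811
```

A smaller A over repeated runs indicates that an algorithm produces tests whose fitness values stay close to each other, i.e., that it is more stable.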

Function Migration_Par_MMPSO: Extracting tests using Migration Parallel MMPSO
For each available thread t do
    Generate the initial population with random questions
    While stop conditions are not met do
        Gather Gbest and all Pbest
        For each Pbest
            Move Pbest towards Gbest using location information of Gbest
            Update velocity using equation (4)
            Update position using equation (5)
        End for
        Gbest moves in a random direction to search for the optimal solution
        If the probability for migration c is met then
            Execute function Migration_MMPSO with t
        End if
    End while
End for

Function Migration_MMPSO: Improving solutions with the migration method
    Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference from other threads during the migration procedure
    Select λ, the set of stronger individuals for migration, except for the Gbest
    Lock other threads so that no unintended changes will happen to them during the migration
    Choose a thread that has a Gbest weaker than the one in the current thread
    Unlock the other threads except for the chosen thread
    Set the status of the chosen thread to "Exchanging"
    Move the λ selected individuals to the chosen thread
    Remove those λ selected individuals from the current thread
    Select the λ weakest individuals in the chosen thread
    Add those λ weakest individuals to the current thread
    Set the status of the chosen thread to "Available"
    Unlock the current thread and the chosen thread

Algorithm 1: Pseudocode of migration parallel MMPSO.


Figure 1: The flowchart of the MMPSO algorithm in migration parallel. (Each of the k threads runs a single PSO combined with SA: input of extraction requirements; SA generates the initial population; fitness F = Σ αiFi is evaluated; Gbest and Pbest are selected, with Pbest approaching Gbest and Gbest approaching the goal; migration runs when the migration condition is met; the algorithm stops and returns the solution when the stopping conditions hold.)

Function Migration_Par_MMPSO_SA: // extracting tests using migration parallel MMPSO and SA
For each available thread t do
    Generate the initial population by using SA, as it is capable of escaping local minimums. In this stage, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest, and any incorrect solutions are removed
    While stop conditions are not met do
        Gather Gbest and all Pbest
        For each Pbest do
            Move Pbest towards Gbest using the location information of Gbest
            Apply the velocity update using equation (4)
            Apply the position update using equation (5)
        End for
        Gbest moves in a random direction to search for the optimal solution
        If the probability for migration c is met then
            Execute function Migration_MMPSO with t
        End if
    End while
End for

Algorithm 2: Pseudocode of the migration parallel MMPSO with SA.
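As an illustration of how SA can seed the initial population, the following Python sketch builds one candidate test whose mean difficulty approaches a target. The bank layout, the cost function, and the helper name are illustrative assumptions, not the paper's exact procedure; the schedule mirrors the SA parameters listed later (initial temperature 100, cooling rate 0.9, termination temperature 0.01, 100 iterations per temperature).

```python
import math
import random

def sa_initial_test(bank, n, target, temp=100.0, cooling=0.9,
                    t_end=0.01, iters=100):
    """Sketch of Simulated Annealing building one initial test.

    `bank` maps question id -> difficulty; the test is a selection of
    `n` distinct question ids.  A neighbor swaps one question; worse
    selections are accepted with probability exp(-delta/temp), which
    is what lets SA escape local minimums.
    """
    cost = lambda sel: abs(sum(bank[q] for q in sel) / n - target)
    test = random.sample(list(bank), n)
    best = test
    while temp > t_end:
        for _ in range(iters):
            cand = test[:]
            cand[random.randrange(n)] = random.choice(
                [q for q in bank if q not in cand])
            delta = cost(cand) - cost(test)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                test = cand
                if cost(test) < cost(best):
                    best = test
        temp *= cooling  # geometric cooling schedule
    return best, cost(best)
```

In the full algorithm this seeding runs once per thread, after which the PSO velocity and position updates take over.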


Figure 2: The allocation of the difficulty level and time of question in the small question bank (question IDs 1 to 1000; series plotted: difficulty level of question, in [0, 1], and time of question, up to about 140).

Figure 3: The allocation of the difficulty level and time of question in the large question bank (question IDs 1 to 12000; series plotted: difficulty level of question, in [0, 1], and time of question, up to about 140).

Table 4: Experimental parameters.

The required level of difficulty (ODR)                       0.5
The total test time (TR)                                     5400 (seconds)
The value of α                                               [0.1, 0.9]
The number of required questions in a test                   100
The number of questions in each section in the test          10
The number of simultaneously generated tests in each run     100
The number of questions in the bank                          1000 and 12000

The PSO's parameters:
The number of particles in each swarm                        10
Random values r1, r2                                         in [0, 1]
The percentage of Pbest individuals which receive position
information from Gbest (C1)                                  5
The percentage of Gbest which moves to final goals (C2)      5
The percentage of migration (δ)                              10
The percentage of migration probability (c)                  5
The stop condition                                           either when the tolerance (fitness) < 0.001 or when the number of movement loops > 1000

The SA's parameters:
Initial temperature                                          100
Cooling rate                                                 0.9
Termination temperature                                      0.01
Number of iterations                                         100

Table 5: Experimental results in the small question bank (50 runs per setting).

Parallel multiswarm multiobjective PSO (parallel MMPSO):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   11                 61.3658           999.75      0.003102      2.43                0.0007071
0.2   445                47.9793           981.03      0.003117      2.64                0.0014707
0.3   425                35.8007           957.53      0.004150      2.73                0.0021772
0.4   530                30.5070           928.10      0.004850      2.80                0.0027973
0.5   774                29.5425           877.65      0.004922      2.85                0.0033383
0.6   1410               22.6973           754.82      0.003965      2.91                0.0034005
0.7   2900               14.9059           470.13      0.002026      2.97                0.0022461
0.8   3005               16.7581           488.31      0.001709      3.01                0.0017271
0.9   3019               28.5975           619.34      0.001358      3.04                0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   4                  142.2539          999.98      0.003080      2.98                0.0006496
0.2   2912               132.3828          900.42      0.001265      3.27                0.0007454
0.3   3681               111.9513          650.42      0.001123      3.38                0.0008364
0.4   3933               100.0204          474.91      0.001085      3.44                0.0009905
0.5   4311               91.7621           318.75      0.000938      3.48                0.0008439
0.6   4776               84.7441           161.23      0.000746      3.53                0.0005124
0.7   4990               81.1127           76.75       0.000666      3.54                0.0002421
0.8   4978               84.6747           131.32      0.000679      3.52                0.0002518
0.9   4937               98.8690           339.89      0.000749      3.41                0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   575                51.3890           959.29      0.002138      5.19                0.0008091
0.2   1426               33.2578           837.87      0.002119      5.60                0.0011804
0.3   1518               25.3135           779.76      0.002587      5.82                0.0017130
0.4   1545               21.0524           745.36      0.002977      5.89                0.0021845
0.5   1650               17.9976           710.92      0.003177      5.95                0.0025374
0.6   1751               16.0573           680.97      0.003272      5.92                0.0028531
0.7   2463               12.9467           540.21      0.002243      5.94                0.0022161
0.8   3315               12.7420           402.75      0.001374      5.90                0.0012852
0.9   3631               19.1735           439.27      0.001067      5.85                0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   816                139.6821          952.42      0.002183      3.82                0.0009039
0.2   3641               111.4438          638.08      0.001039      5.19                0.0005336
0.3   3958               98.9966           463.53      0.000984      5.36                0.0006349
0.4   4084               92.6536           357.13      0.000973      5.30                0.0007475
0.5   4344               88.3098           255.46      0.000898      5.10                0.0007442
0.6   4703               83.4776           144.00      0.000758      4.90                0.0004939
0.7   4874               81.2981           84.57       0.000697      4.70                0.0003271
0.8   4955               84.2094           106.68      0.000683      4.45                0.0002609
0.9   4937               94.1685           267.30      0.000746      4.19                0.0002345


Table 6: Experimental results in the large question bank (50 runs per setting).

Parallel multiswarm multiobjective PSO (parallel MMPSO):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   2931               23.50             888.85      0.001137      0.95                0.000476
0.2   4999               14.05             484.33      0.000725      1.03                0.000219
0.3   4997               9.56              296.80      0.000689      1.04                0.000234
0.4   4999               5.99              190.55      0.000676      1.04                0.000233
0.5   5000               3.61              121.24      0.000668      1.05                0.000236
0.6   5000               2.77              79.32       0.000663      1.05                0.000235
0.7   5000               3.22              92.33       0.000669      1.05                0.000238
0.8   5000               4.92              173.19      0.000673      1.04                0.000231
0.9   5000               10.98             384.13      0.000738      1.02                0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   3055               99.75             890.40      0.001095      0.96                0.000432
0.2   5000               84.90             469.23      0.000709      1.04                0.000224
0.3   5000               74.43             275.54      0.000686      1.05                0.000230
0.4   5000               73.03             168.91      0.000668      1.06                0.000237
0.5   5000               69.88             99.92       0.000663      1.07                0.000235
0.6   5000               67.34             61.42       0.000662      1.07                0.000236
0.7   5000               53.43             69.02       0.000661      1.07                0.000235
0.8   5000               70.06             132.27      0.000676      1.06                0.000236
0.9   5000               79.58             319.36      0.000734      1.03                0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   2943               33.52             886.90      0.001144      0.95                0.000482
0.2   4995               19.33             488.60      0.000724      1.02                0.000219
0.3   4998               12.43             295.69      0.000688      1.04                0.000231
0.4   5000               8.57              190.20      0.000667      1.04                0.000238
0.5   4999               6.02              120.88      0.000665      1.05                0.000240
0.6   5000               4.44              78.89       0.000669      1.04                0.000234
0.7   5000               4.98              92.53       0.000668      1.05                0.000234
0.8   5000               7.94              171.98      0.000669      1.04                0.000236
0.9   5000               15.76             383.09      0.000738      1.02                0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
α     Successful times   Avg runtime (s)   Avg loops   Avg fitness   Avg duplicate (%)   Std deviation
0.1   3122               102.00            888.48      0.001091      0.96                0.000436
0.2   5000               85.68             469.50      0.000716      1.04                0.000222
0.3   5000               77.35             276.03      0.000678      1.05                0.000235
0.4   5000               73.19             167.84      0.000674      1.06                0.000234
0.5   5000               69.62             99.66       0.000665      1.06                0.000238
0.6   5000               67.83             61.33       0.000660      1.07                0.000237
0.7   5000               64.19             69.02       0.000666      1.07                0.000237
0.8   5000               61.46             133.45      0.000673      1.06                0.000234
0.9   5000               71.62             319.68      0.000731      1.03                0.000213

Figure 4: Experimental results of the runtime and fitness value for the small question bank (Table 5). Series: runtime and fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, plotted against the weight constraint α from 0.1 to 0.9.


where z is the number of experimental runs, y is the average fitness over all runs, and y_i is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms: if its value is low, then the generated tests of each run do not differ much in their fitness values. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or the balance between the two. We can therefore select α to suit what we require, emphasizing either the test duration or the test difficulty.
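The stability measure described above can be made explicit with a small helper; this is our own sketch, assuming the population form σ = sqrt((1/z) Σ (y_i − ȳ)²) over the per-run average fitness values.

```python
import math

def fitness_std(run_fitnesses):
    """Standard deviation of the per-run average fitness values,
    used to judge the stability of an algorithm across z runs:
    sigma = sqrt((1/z) * sum((y_i - y_bar)**2)).
    """
    z = len(run_fitnesses)
    y_bar = sum(run_fitnesses) / z          # average fitness over all runs
    return math.sqrt(sum((y - y_bar) ** 2 for y in run_fitnesses) / z)
```

A lower value means consecutive runs produce tests of very similar quality, so a single run is usually representative.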

5.3. Experimental Results. The experiments are executed with the parameters following Ridge and Kudenko [36] in Table 3, and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires a number of iteration loops to successfully extract the 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of the fitness values of all 5000 generated tests. The average duplicate indicates the average number of duplicate questions among the 100 generated tests over all 50 runs; it is also used to indicate the diversity of the tests: the lower the value, the more diverse the tests.

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness value, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the requirement for the test difficulty is harder than satisfying the requirement for the total test time. The experimental results also show that integrating SA gives a better fitness value than omitting SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests in a single run, each containing 30 questions, then some generated tests must contain questions that also appear in other generated tests.
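One plausible way to compute such a duplicate percentage is sketched below; this is our reading of the measure, since the exact formula is not given here.

```python
def duplicate_percentage(tests):
    """Average share (in %) of questions in each test that also appear
    in at least one other generated test of the same run.  One plausible
    reading of the 'average duplicate' measure; lower = more diverse.
    """
    rates = []
    for i, test in enumerate(tests):
        # union of the questions used by every other test in the run
        others = set().union(*(set(t) for j, t in enumerate(tests) if j != i))
        shared = sum(1 for q in test if q in others)
        rates.append(100.0 * shared / len(test))
    return sum(rates) / len(rates)
```

For a bank of 100 questions and 50 tests of 30 questions each, this value cannot be zero, which is the point made in the example above.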

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of the former are smaller. In other words, the differences in fitness values between runs are smaller with SA than without it. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO-with-SA algorithms many times to obtain generated tests that better fit the test requirements: the generated tests obtained in the first run are likely close to the requirements (due to the low fitness value), and the chance of obtaining generated tests that fit the requirements poorly is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multichoice tests, provided the quality of the multichoice questions is good (diverse difficulty levels and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms, MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

Figure 5: Experimental results of the runtime and fitness value for the large question bank (Table 6). Series: runtime and fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, plotted against the weight constraint α from 0.1 to 0.9.


MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with a focus on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, for example by using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575-1761, Hefei, China, 2003.
[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616-624, 2009.
[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405-1423, 2018.
[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218-1229, Springer, Berlin, Germany, 2007.
[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298-305, 2010.
[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multimodal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759-1766, Brisbane, Australia, June 2012.
[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883-898, 2018.
[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383-23405, 2018.
[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multiobjective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920-22929, 2018.
[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868-20880, 2018.
[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863-22874, 2018.
[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Perth, Australia, 1995.
[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282-295, 2015.
[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771-790, 2018.
[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.
[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909-941, 2017.
[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126-140, 2018.
[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129-136, 2015.
[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1-9, 2014.
[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71-96, 2018.
[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11-25, 2016.
[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20-26, 2016.
[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.
[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1-29, 2017.
[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.
[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362-381, 2017.
[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229-235, 2018.
[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, 2017.
[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509-533, 2017.
[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680-700, 2018.
[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028-1051, 2017.
[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147-161, 2019.
[34] D. Hunt, "W. A. Lewis on "economic development with unlimited supplies of labour"," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87-95, Harvester Wheatsheaf, New York, NY, USA, 1989.
[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329-338, Springer, Berlin, Germany, 2018.
[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265-286, Springer, Berlin, Germany, 2010.
[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198-209, Milan, Italy, 2017.
[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619-1629, 2018.
[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172-187, 2019.



example, Sen and Krishnamoorthy [13, 14] transformed the original PSO into discrete PSO for solving the problem of transmitting information on networks. The results prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

To further improve the performance for real-life applications, some variants of PSO have been proposed and exploited, such as multiswarm PSO. Peng et al. [15] proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods, such as the periodically stochastic learning strategy or the random mutation learning strategy. The experiments were run on a set of specific benchmarks. The results showed that the proposed method gives a better quality of solutions and has a higher chance of giving correct solutions than normal PSO. Vafashoar and Meybodi [16] proposed an approach that uses Cellular Learning Automata (CLA) for multiswarm PSO. Each swarm is placed on a cell of the CLA, and each particle's velocity is affected by some other particles. The connected particles are adjusted over time via periods of learning. The results indicate that the proposed method is quite effective for optimization problems on various datasets. In order to balance the search capabilities between swarms, Xia et al. [17] used multiswarm PSO in combination with various strategies, such as the dynamic subswarm number, subswarm regrouping, and purposeful detecting. Nakisa et al. [18] proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combined the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to reach their targets.

In practice, there exist many optimization problems with multiple objectives instead of a single objective. Thus, a lot of work on multiobjective optimization has been proposed. For example, Li and Babak [19] proposed a multiobjective PSO combined with an enhanced local search ability and parameter-less sharing. Kapse and Krishnapillai [20] also proposed an adaptive local search method for multiobjective PSO using the time variance search space index to improve the diversity of solutions and convergence. Based on crowded distance sorting, Cheng et al. [21] proposed an improved version, circular crowded sorting, combined with multiobjective PSO. The approach scatters the individuals of the initial populations across the search space in order to be better at gathering the Pareto frontier. The method was proven to improve the search capabilities, the speed of convergence, and the diversity of solutions. Similarly, Adel et al. [22] used multiobjective PSO with uniform design instead of traditional random methods to generate the initial population. Based on R2 measurement, Alan et al. [23] proposed an approach that used R2 as an indicator to navigate swarms through the search space in multiobjective PSO. By combining utopia point-guided search with multiobjective PSO, Kapse and Krishnapillai [24] proposed a strategy that selects the best individuals located near the utopia points. The authors also compared their method with other algorithms, such as NSGA-II (Nondominated Sorting Genetic Algorithm II) by Deb et al. [25] or CMPSO (coevolutionary multiswarm PSO) by Zhan et al. [26], on several benchmarks and demonstrated the proposed method's effectiveness. Saxena and Mishra [27] designed a multiobjective PSO algorithm named MOPSO_tridist. The algorithm used triangular distance to select leader individuals, which cover different regions of the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSO_tridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images. Liansong and Dazhi [28] designed a multiobjective optimization based on chaotic particle swarm optimization, and Xiang and Xueqing [29] proposed the MSCLPSO algorithm, an extension of comprehensive learning particle swarm optimization that incorporates various techniques from other evolutionary algorithms. In order to increase the flexibility of multiobjective PSO, Mokarram and Banan [30] proposed the FC-MOPSO algorithm, which can work on a mix of constrained, unconstrained, continuous, and/or discrete optimization problems. Recently, Mohamad et al. [31] reviewed and summarized the disadvantages of multiobjective PSO. Based on that, they proposed an algorithm, M-MOPSO. The authors also proposed a strategy based on dynamic search boundaries to help escape local optima. M-MOPSO was proven to be more efficient when compared with several state-of-the-art algorithms, such as the Multiobjective Grey Wolf Optimizer (MOGWO), the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), and Multiobjective Differential Evolution (MODE).

An extension of multiple objective optimization problems is dynamic multiple objective optimization problems, in which each objective changes differently depending on the time or environment. To deal with this problem, Liu et al. [32] proposed CMPSODMO, which is based on the multiswarm coevolution strategy. The authors also combined it with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed.

To make it easier for readers, Table 1 summarizes the different application domains in which PSO algorithms have been applied for different purposes.

The abovementioned works can be effective and efficient for the optimization problems in Table 1; however, applying them to the problem of generating k tests in a single run with multiple objectives is not feasible, according to the work of Nguyen et al. [33]. Therefore, in this work, we propose an approach that uses Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) combined with Simulated Annealing (SA) for generating k tests with multiple objectives. Each swarm in this case is a test candidate, and it runs on a separate thread. The migration happens randomly by chance. We also aim to improve the accuracy and diversity of solutions.


Tabl

e1

Summarizes

different

applicationdo

mains

inwhich

PSO

algorithmshave

been

appliedfordifferent

purposes

Categories

References

Algorith

mTy

pesof

optim

izationprob

lems

Mod

ernop

timizationtechniqu

esUncon

strained

Con

strained

Con

tinuo

usDisc

rete

Sing

le-

objective

Multio

bjectiv

eSing

le-

swarm

Multiswarm

Academia

and

scientom

etrics

[22]

Particle

swarm

optim

ization

Basedon

multio

bjectiv

efunctio

nswith

unifo

rmdesig

n(M

OPS

O-

UD)

XX

XMultio

bjectiv

ePS

Owith

unifo

rmdesig

ngeneratesthe

initial

popu

latio

ninsteadof

tradition

alrand

ommetho

ds

[23]

R2-based

multimany-ob

jective

particle

swarm

optim

ization

XX

X

eprop

osed

approach

used

R2as

anindicatorto

navigate

swarmsthroughthesearch

spacein

multio

bjectiv

ePS

O

[21]

Anim

proved

version

circular

crow

dedsorting

andcombined

with

multio

bjectiv

ePS

OX

XX

eindividu

alsof

initialpo

pulatio

nsacross

thesearch

space

tobe

bette

rat

gatheringthePa

reto

fron

tier

[20]

Anadaptiv

elocalsearchmetho

dformultio

bjectiv

ePS

OX

XX

Anadaptiv

elocalsearch

metho

dform

ultio

bjectiv

ePSO

using

thetim

evariance

search

spaceindexto

improvethediversity

ofsolutio

nsandconv

ergence

[24]

Com

bining

utop

iapo

int-guided

search

with

multio

bjectiv

ePS

OX

XX

Astrategy

that

selectsthebest

individu

alsthat

arelocated

near

theutop

iapo

ints

[19]

Ano

velM

OPS

Owith

enhanced

localsearchabilityandparameter-

less

sharing

XX

X

eprop

osed

approach

estim

ates

thedensity

oftheparticlesrsquo

neighb

orho

odin

thesearch

spaceInitiallythe

prop

osed

metho

daccurately

determ

ines

thecrow

ding

factor

ofthe

solutio

nsinlaterstagesiteffectiv

elyguides

theentiresw

arm

toconv

erge

closeto

thetrue

Pareto

fron

t

[28]

Chaotic

particle

swarm

optim

ization

XX

X

eworkim

proves

thediversity

ofthepo

pulatio

nanduses

simplified

meshredu

ctionandgene

exchange

toim

provethe

performance

ofthealgorithm

[32]

Acoevolutionary

techniqu

ebased

onmultiswarm

particle

swarm

optim

ization

XX

XX

eauthorscombinedtheirprop

osed

algorithm

with

special

boun

dary

constraint

processin

gandavelocityup

datestrategy

tohelp

with

thediversity

andconv

ergencespeed

[31]

Particle

swarm

optim

ization

algorithm

basedon

dynamic

boun

dary

search

forconstrained

optim

ization

XX

X

eauthorsprop

osed

astrategy

basedon

dynamic

search

boun

daries

tohelp

escape

thelocalo

ptim

a

[30]

AnewPS

O-based

algorithm

(FC-

MOPS

O)

XX

XX

XX

FC-M

OPS

Oalgorithm

canworkon

amix-upof

constrained

unconstrained

continuo

usandor

discretesingle-ob

jective

multio

bjectiv

eop

timizationprob

lemsalgorithm

that

can

workon

amix-upof

constrainedun

constrainedcontinuo

us

andor

discrete

optim

izationprob

lems

[15]

Ano

velp

article

swarm

optim

izationalgorithm

with

multip

lelearning

strategies

(PSO

-MLS

)

XX

XX

eauthorsprop

osed

anapproach

form

ultiswarm

PSO

that

pairsthevelocity

update

ofsomesw

armswith

different

metho

dssuch

astheperiod

ically

stochasticlearning

strategy

orrand

ommutationlearning

strategy

[16]

CellularLearning

Autom

ata

(CLA

)formultiswarm

PSO

XX

XX

Each

swarm

isplaced

onacellof

theCLA

and

each

particlersquos

velocity

isaffectedby

someotherparticles

econn

ected

particlesareadjusted

overtim

eviaperiod

sof

learning

[17]

Improved

particle

swarm

optim

izationalgorithm

basedon

dynamical

topo

logy

and

purposeful

detecting

XX

XIn

orderto

balancethesearch

capabilitiesbetweensw

arms

eextensiveexperimentalresultsillustratetheeffectiv

eness

andeffi

ciency

ofthethree

prop

osed

strategies

used

inMSP

SO

[29]

Particle

swarm

optim

izationwith

differentiale

volutio

n(D

E)strategy

XX

XX

epu

rposeisto

achievehigh

-perform

ance

multio

bjectiv

eop

timization

[26]

Coevolutio

nary

multiswarm

PSO

XX

X

Tomod

ifythevelocity

update

equatio

nto

increase

search

inform

ationanddiversity

solutio

nsto

avoidlocalP

areto

fron

t

eresults

show

superior

performance

insolving

optim

izationprob

lems

4 Scientific Programming

Table 1: Continued.

Columns: categories; references; algorithm; types of optimization problems (unconstrained, constrained, continuous, discrete, single-objective, multiobjective, single-swarm, multiswarm); description.

Application (artificial intelligence):

[18] Particle swarm optimization with local search. The authors proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.

[27] Improved particle swarm optimization based on a new leader selection strategy. The algorithm used triangular distance to select leader individuals that cover different regions in the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSOtridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images.

[13, 14] Discrete PSO. For solving the problem of transmitting information on networks. The work proves that the proposed discrete PSO outperforms Simulated Annealing (SA).

Application (multichoice question test extraction):

[2] Novel approach of particle swarm optimization (PSO). A dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks.

[3] Particle swarm optimization (PSO). The authors used particle swarm optimization to generate tests with difficulties approximating the levels required by users. The experimental results show that PSO gives the best performance concerning most of the criteria.

[33] Multiswarm single-objective particle swarm optimization. The authors use particle swarm optimization to generate multiple tests with difficulties approximating the levels required by users. In the parallel stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions.


3. Problem Statement

3.1. Problem of Generating Multiple Tests. In our previous works [3, 33], we proposed a PSO-based method for multichoice test generation; however, it was a single-objective approach. In this paper, we introduce a multiobjective approach to multichoice test generation by combining the PSO and SA algorithms.

Let Q = {q1, q2, q3, ..., qn} be a question bank with n questions. Each question qi ∈ Q contains four attributes: QC, SC, QD, and OD. QC is a question identifier code, used to avoid duplication of any question in the solution. SC denotes the section code of the question, used to indicate which section the question belongs to. QD denotes the time limit of the question, and OD denotes a real value in the range [0.1, 0.9] that represents the objective difficulty (level) of the question. QC, SC, and QD are discrete positive integer values, as in the work of Bui et al. [3] and Nguyen et al. [33].

The problem of generating multiple k tests (or just multiple tests) is to generate k tests simultaneously in a single run; that is, our objective is to generate a set of tests in which each test Ei = {qi1, qi2, qi3, ..., qim} (qij ∈ Q, 1 ≤ j ≤ m, 1 ≤ i ≤ k, k ≤ n) consists of m (m ≤ n) questions. Additionally, those tests must satisfy both the requirement on objective difficulty ODR and the requirement on testing time duration TR given by users. For example, ODR = 0.8 and TR = 45 minutes mean that all the generated tests must have a level of difficulty approximately equal to 0.8 and a test time equal to 45 minutes.

The objective difficulty of a test Ei is defined as OD_Ei = (Σ_{j=1}^{m} q_ij·OD)/m, and the duration of the test Ei is determined by T_Ei = Σ_{j=1}^{m} q_ij·QD.

Besides the aforementioned requirements, each generated test must satisfy the following additional constraints:

C1: each question in a generated test must be unique (i.e., a question cannot appear more than once in a test).

C2: to make the test more diverse, there must be no case in which all questions in a test have the same difficulty value as the required objective difficulty ODR. For example, if ODR = 0.6, then ∃ q_ki ∈ E_k such that q_ki·OD ≠ 0.6.

C3: some questions in a question bank must stay in the same groups because their content is related to each other. The generated tests must ensure that all the questions in one group appear together; that is, if a question of a specific group appears in a test, the remaining questions of the group must also be present in the same test [3, 33].

C4: as users may require generated tests to have several sections, a generated test must ensure that the required number of questions is drawn from the question bank for each section.

3.2. Modeling MMPSO for the Problem of Generating Multiple Tests. The model of MMPSO for the problem of generating multiple tests can be represented as follows:

  min f1(x1, x2, x3, ..., xm)
  min f2(x1, x2, x3, ..., xm)
  ...
  min fk(x1, x2, x3, ..., xm)        (2)

where f1, f2, ..., fk are swarms that represent the multiple tests and x1, ..., xm are the questions in a test.

Assume that F is an objective function for the multiobjective version of the problem; it can be formulated as follows:

  F = Σ αi·Fi        (3)

where αi is a weight constraint (αi ∈ (0, 1] and Σ αi = 1) and Fi is a single-objective function. In this paper, we evaluate two functions: the average level of difficulty requirement of the tests, F1, and the total test duration, F2.

F1 = f(q_kj·OD) = |(Σ_{j=1}^{m} q_kj·OD)/m − ODR|, where F1 satisfies the conditions C1, C2, C3, and C4, m is the total number of questions in the test, q_kj·OD is the difficulty value of each question, and ODR is the required difficulty level.

F2 = f(q_kj·QD) = |1 − (Σ_{j=1}^{m} q_kj·QD)/TR|, where F2 satisfies the conditions C1, C2, C3, and C4, m is the total number of questions in the test, q_kj·QD is the duration of each question, and TR is the required total time of the tests.

The objective function F is used as the fitness function in the MMPSO, and the value of the objective function is considered the fitness of the resulting test. The smaller F becomes, the better the fitness. To improve the quality of the test, we also take into account the constraints C1, C2, C3, and C4.

For example, provided that we have a question bank as in Table 2, and the test extraction requirements are four questions, a difficulty level of 0.6 (ODR = 0.6), a total test duration of 300 seconds (TR = 300), and a weight constraint α = 0.4, Table 3 illustrates a candidate solution with its fitness of 0.14 computed by using formula (3).
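The fitness computation of formula (3) for this two-objective case can be reproduced in a few lines. The attribute keys are the same hypothetical ones as in the earlier sketches, and the weighting F = α·F1 + (1 − α)·F2 follows the worked example in Table 3.

```python
def fitness(test, odr, tr, alpha):
    """Weighted fitness F = alpha*F1 + (1 - alpha)*F2 from formula (3)."""
    f1 = abs(sum(q["OD"] for q in test) / len(test) - odr)   # difficulty gap
    f2 = abs(1 - sum(q["QD"] for q in test) / tr)            # duration gap
    return alpha * f1 + (1 - alpha) * f2

# The individual from Table 3: OD = 0.4, 0.8, 0.3, 0.7 and QD = 65, 35, 100, 40.
candidate = [{"OD": 0.4, "QD": 65}, {"OD": 0.8, "QD": 35},
             {"OD": 0.3, "QD": 100}, {"OD": 0.7, "QD": 40}]
# 0.4 * |0.55 - 0.6| + 0.6 * |1 - 240/300| = 0.02 + 0.12 = 0.14
```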

4. MMPSO in Extracting Multiple Tests

4.1. Process of MMPSO for Extracting Tests. This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests, based on the idea in Bui et al. [3]. It can be described as follows. Creating an initial swarm population is the first step in PSO, in which each particle in a swarm is considered a candidate test; this first population also affects the speed of convergence to optimal solutions. This step randomly picks questions from a question bank. The questions, either stand-alone or staying in groups (constraint C3), are drawn for one section (constraint C4) until the required number of questions for that section is reached, and the drawing process is repeated for the next sections. When the required number of questions of the candidate test and all the constraints are met, the fitness value of the generated test is computed according to formula (3).

The Gbest and Pbest position information is the set of contained questions. All Pbest slowly move towards Gbest by using the


location information of Gbest. The movement is the replacement of some questions in the candidate test according to the velocity of Pbest. If the fitness value of a newly found Pbest of a particle is smaller than the particle's currently best-known Pbest (i.e., the new position is better than the old), then we assign the newly found position value to Pbest.

Gbest moves towards the final optimal solution in random directions. The movement is achieved by replacing its content with some random questions from the question bank. In a similar way to Pbest, if the new position is no better than the old one, the Gbest value is not updated.

The algorithm ends when the fitness value is lower than the fitness threshold ε or the number of movements (iteration loops) surpasses the loop threshold λ. Both thresholds are given by users.

4.2. Migration Parallel MMPSO for Extracting Tests (Parallel MMPSO). Based on the idea in Nguyen et al. [33], we present the migration parallel approach of MMPSO for increasing performance. Each swarm corresponds to a thread, and migration happens by chance between swarms. The migration method starts by locking the current thread (swarm) to avoid interference from other threads during the migration procedure.

In the dual-sector model [34], Lewis describes a relationship between two regions: the subsistence sector and the capitalist sector. We can view the two types of economic sectors here as the strong (capitalist) sectors and the weak (subsistence) sectors (while ignoring other related aspects of the economy). Whether a sector is strong or weak depends on the fitness value of the Gbest position of its swarm. However, when applying those theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The direction of migration changes when individuals with strong Pbest (strong individuals) in strong sectors move to weak sectors. The weak sectors' Gbest may be replaced by the incoming Pbest, and the fitness value of the weak swarms should make a large lift, as in the work of Nguyen et al. [33].

Backward migration from the weak swarms to the strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This ensures that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there are changes in the fitness value of the current Gbest compared to the previous Gbest.

The probability for migration is denoted as γ, and its unit is a percentage (%).

The number of migrating particles is equal to δ times the size of the swarm (i.e., the number of existing particles in the swarm), where δ denotes the percentage of migration.

The migration parallel MMPSO-based approach to extract multiple tests is described in the form of pseudocode in Algorithm 1.

The particle updates its velocity (V) and position (P) with the following formulas:

  V^{t+1} = V^t + r1 × V^t_pbest + r2 × V^t_gbest        (4)

where V_pbest is the velocity of Pbest, determined by V_pbest = α × m; V_gbest is the velocity of Gbest, determined by V_gbest = β × m; α, β ∈ (0, 1); r1 and r2 are random values; and m is the number of questions in the test solutions.

  P^{t+1} = P^t + V^{t+1}        (5)
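Because a particle's position is a set of questions, the update in equations (4) and (5) acts as question replacement rather than vector arithmetic. A minimal discrete sketch follows; the helper name and the list-of-ids representation are illustrative, not the paper's code.

```python
import random

def move_towards(test, guide, k):
    """Discrete analogue of the velocity/position update: replace about k
    questions in `test` with questions taken from `guide` (a Pbest or Gbest
    position). Here k plays the role of the velocity V = alpha*m from
    equation (4), and questions are plain hashable ids."""
    new = list(test)
    candidates = [q for q in guide if q not in new]
    for _ in range(min(k, len(candidates))):
        i = random.randrange(len(new))                       # position to overwrite
        new[i] = candidates.pop(random.randrange(len(candidates)))
    return new
```

The same helper serves both movements: Pbest moves towards Gbest with k derived from α × m, and Gbest explores by replacing questions drawn randomly from the whole bank.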

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying the tests. This stage happens when there is a change in the value of Gbest of a swarm (for each thread) in the first stage. In this second stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions, based on the work of Nguyen et al. [33]. The complete flowchart that applies the parallelized migration method to the MMPSO algorithm is shown in Figure 1.

4.3. Migration Parallel MMPSO in Combination with Simulated Annealing. As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of a set of initial solutions (the population) is generally performed randomly in PSO. This is one of its drawbacks: since the search space is very wide, the probability of getting stuck in a local optimum solution is

Table 3: An example of test results.

An individual that satisfies the requirement:
  QC: 05, 08, 01, 04
  OD: 0.4, 0.8, 0.3, 0.7
  QD: 65, 35, 100, 40

Fitness (ODR = 0.6, TR = 300, α = 0.4):
  (0.4 × |(2.2/4) − 0.6|) + [(1 − 0.4) × |1 − (240/300)|] = 0.14

Table 2: The question bank.

  QC: 01   02   03   04   05   06   07   08   09   10
  OD: 0.3  0.2  0.8  0.7  0.4  0.6  0.5  0.8  0.2  0.3
  QD: 100  110  35   40   65   60   60   35   110  100


also high. In order to improve the initial population, we apply SA in the initial population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optimums, as in Kharrat and Neji [35]. In this study, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest (which is commonly used in PSO). The MMPSO with SA is described by the pseudocode in Algorithm 2.
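A sketch of this SA seeding step is given below. It assumes a generic `fitness_fn` (lower is better) and uses the SA settings from Table 4 as defaults; the neighbour move swaps one question for an unused one, which keeps constraint C1 intact. The function is illustrative rather than the paper's implementation.

```python
import math
import random

def sa_initial_test(bank, m, fitness_fn, t0=100.0, cooling=0.9,
                    t_min=0.01, iters=100):
    """Seed one particle with Simulated Annealing: start from a random
    m-question draw and accept worse neighbours with probability
    exp(-delta/T), cooling T geometrically (defaults mirror Table 4)."""
    current = random.sample(bank, m)          # unique questions (C1)
    best = current
    t = t0
    while t > t_min:
        for _ in range(iters):
            neighbour = list(current)
            unused = [q for q in bank if q not in neighbour]
            if not unused:                    # bank exhausted; nothing to swap
                break
            neighbour[random.randrange(m)] = random.choice(unused)
            delta = fitness_fn(neighbour) - fitness_fn(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = neighbour           # accept (possibly worse) move
            if fitness_fn(current) < fitness_fn(best):
                best = current                # track best solution seen
        t *= cooling
    return best
```

Running this once per particle gives the initial population that the migration parallel MMPSO then refines.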

5. Experimental Studies

5.1. Experimental Environment and Data. Bui et al. [3] evaluated different algorithms, such as the random method, genetic algorithms, and a PSO-based algorithm, for extracting tests from question banks of varying sizes. The results of that experiment showed that the PSO-based algorithms are better than the others. Hence, the experiment in this paper only evaluates and compares the improved SA parallel MMPSO algorithm with the normal parallel MMPSO algorithm in terms of the diversity of tests and the accuracy of solutions.

All proposed algorithms are implemented in C and run on 2 computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include 2 question banks: one with 998 different questions (the small question bank) and the other with 12,000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12,000 different questions, in which each part has 1,000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) and evaluating the two functions: the average level of difficulty requirement of the tests, F1, and the total test duration, F2.

5.2. Evaluation Method. In this part, we present the formula for evaluating all algorithms regarding their stability in producing the required tests with various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

  A = sqrt( Σ_{i=1}^{z} (y_i − ȳ)² / (z − 1) )        (6)
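Formula (6) is the ordinary sample standard deviation; a direct transcription:

```python
def sample_std(values):
    """Sample standard deviation A of run fitness values, per formula (6)."""
    z = len(values)
    mean = sum(values) / z
    return (sum((y - mean) ** 2 for y in values) / (z - 1)) ** 0.5
```

For example, `sample_std([1, 2, 3, 4])` equals sqrt(5/3) ≈ 1.291.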

Function Migration_Par_MMPSO: extracting tests using migration parallel MMPSO
  For each available thread t do
    Generate the initial population with random questions
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest do
        Move Pbest towards Gbest using the location information of Gbest
        Update velocity using equation (4)
        Update position using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Function Migration_MMPSO: improving solutions with the migration method
  Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference from other threads during the migration procedure
  Select λ, the set of stronger individuals for migration, excluding the Gbest
  Lock the other threads so that no unintended changes happen to them during the migration
  Choose a thread that has a Gbest weaker than the one in the current thread
  Unlock the other threads except for the chosen thread
  Set the status of the chosen thread to "Exchanging"
  Move the λ selected individuals to the chosen thread
  Remove those λ selected individuals from the current thread
  Select the λ weakest individuals in the chosen thread
  Add those λ weakest individuals to the current thread
  Set the status of the chosen thread to "Available"
  Unlock the current thread and the chosen thread

Algorithm 1: Pseudocode of migration parallel MMPSO.
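The particle exchange at the heart of Migration_MMPSO can be sketched with per-swarm locks. The `Swarm` structure assumed here (a `particles` list, a `fitness` scorer where lower is better, and a `lock`) is hypothetical, and the sketch omits the Gbest exclusion and the status flags of Algorithm 1.

```python
import threading

def migrate(src, dst, k):
    """Move the k strongest particles of `src` to `dst` and send the k
    weakest particles of `dst` back, so both swarm sizes are preserved
    (a simplified sketch of function Migration_MMPSO)."""
    with src.lock, dst.lock:            # hold both swarms during the exchange
        strong = sorted(src.particles, key=src.fitness)[:k]
        weak = sorted(dst.particles, key=dst.fitness, reverse=True)[:k]
        for p in strong:
            src.particles.remove(p)
            dst.particles.append(p)
        for p in weak:
            dst.particles.remove(p)
            src.particles.append(p)
```

In real concurrent use, the two locks should be acquired in a fixed global order (for example, by swarm id) to avoid deadlock when two threads migrate towards each other simultaneously.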


Figure 1: The flowchart of the MMPSO algorithm in migration parallel. Each thread runs a single PSO combined with SA: SA generates the initial population, F = Σ αi·Fi is evaluated, Pbest approaches Gbest and Gbest approaches the goal, migration runs when the migration condition is met, and the solution is returned when the stopping conditions hold.

Function Migration_Par_MMPSO_SA: extracting tests using migration parallel MMPSO and SA
  For each available thread t do
    Generate the initial population by using SA, as it is capable of escaping local minimums. In this stage, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest, and any incorrect solutions are removed.
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest do
        Move Pbest towards Gbest using the location information of Gbest
        Apply the velocity update using equation (4)
        Apply the position update using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Algorithm 2: Pseudocode of migration parallel MMPSO with SA.


Figure 2: The allocation of the difficulty level and time of question in the small question bank.

Figure 3: The allocation of the difficulty level and time of question in the large question bank.

Table 4: Experimental parameters.

  The required level of difficulty (ODR): 0.5
  The total test time (TR): 5400 (seconds)
  The value of α: [0.1, 0.9]
  The number of required questions in a test: 100
  The number of questions in each section in the test: 10


Table 4: Continued.

  The number of simultaneously generated tests in each run: 100
  The number of questions in the bank: 1000 and 12,000

  The PSO's parameters:
    The number of particles in each swarm: 10
    Random values r1, r2: in [0, 1]
    The percentage of Pbest individuals which receive position information from Gbest (C1): 5
    The percentage of Gbest which moves to final goals (C2): 5
    The percentage of migration δ: 10
    The percentage of migration probability γ: 5
    The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000

  The SA's parameters:
    Initial temperature: 100
    Cooling rate: 0.9
    Termination temperature: 0.01
    Number of iterations: 100

Table 5: Experimental results in the small question bank.

Columns: weight constraint (α); number of runs; successful times; average runtime for extracting tests (s); average number of iteration loops; average fitness; average duplicate (%); standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
  0.1  50  11    61.3658   999.75  0.003102  2.43  0.0007071
  0.2  50  445   47.9793   981.03  0.003117  2.64  0.0014707
  0.3  50  425   35.8007   957.53  0.004150  2.73  0.0021772
  0.4  50  530   30.5070   928.10  0.004850  2.80  0.0027973
  0.5  50  774   29.5425   877.65  0.004922  2.85  0.0033383
  0.6  50  1410  22.6973   754.82  0.003965  2.91  0.0034005
  0.7  50  2900  14.9059   470.13  0.002026  2.97  0.0022461
  0.8  50  3005  16.7581   488.31  0.001709  3.01  0.0017271
  0.9  50  3019  28.5975   619.34  0.001358  3.04  0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
  0.1  50  4     142.2539  999.98  0.003080  2.98  0.0006496
  0.2  50  2912  132.3828  900.42  0.001265  3.27  0.0007454
  0.3  50  3681  111.9513  650.42  0.001123  3.38  0.0008364
  0.4  50  3933  100.0204  474.91  0.001085  3.44  0.0009905
  0.5  50  4311  91.7621   318.75  0.000938  3.48  0.0008439
  0.6  50  4776  84.7441   161.23  0.000746  3.53  0.0005124
  0.7  50  4990  81.1127   76.75   0.000666  3.54  0.0002421
  0.8  50  4978  84.6747   131.32  0.000679  3.52  0.0002518
  0.9  50  4937  98.8690   339.89  0.000749  3.41  0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
  0.1  50  575   51.3890   959.29  0.002138  5.19  0.0008091
  0.2  50  1426  33.2578   837.87  0.002119  5.60  0.0011804
  0.3  50  1518  25.3135   779.76  0.002587  5.82  0.0017130
  0.4  50  1545  21.0524   745.36  0.002977  5.89  0.0021845
  0.5  50  1650  17.9976   710.92  0.003177  5.95  0.0025374
  0.6  50  1751  16.0573   680.97  0.003272  5.92  0.0028531
  0.7  50  2463  12.9467   540.21  0.002243  5.94  0.0022161
  0.8  50  3315  12.7420   402.75  0.001374  5.90  0.0012852
  0.9  50  3631  19.1735   439.27  0.001067  5.85  0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
  0.1  50  816   139.6821  952.42  0.002183  3.82  0.0009039
  0.2  50  3641  111.4438  638.08  0.001039  5.19  0.0005336
  0.3  50  3958  98.9966   463.53  0.000984  5.36  0.0006349
  0.4  50  4084  92.6536   357.13  0.000973  5.30  0.0007475
  0.5  50  4344  88.3098   255.46  0.000898  5.10  0.0007442
  0.6  50  4703  83.4776   144.00  0.000758  4.90  0.0004939
  0.7  50  4874  81.2981   84.57   0.000697  4.70  0.0003271
  0.8  50  4955  84.2094   106.68  0.000683  4.45  0.0002609
  0.9  50  4937  94.1685   267.30  0.000746  4.19  0.0002345


Table 6: Experimental results in the large question bank.

Columns: weight constraint (α); number of runs; successful times; average runtime for extracting tests (s); average number of iteration loops; average fitness; average duplicate (%); standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
  0.1  50  2931  23.50   888.85  0.001137  0.95  0.000476
  0.2  50  4999  14.05   484.33  0.000725  1.03  0.000219
  0.3  50  4997  9.56    296.80  0.000689  1.04  0.000234
  0.4  50  4999  5.99    190.55  0.000676  1.04  0.000233
  0.5  50  5000  3.61    121.24  0.000668  1.05  0.000236
  0.6  50  5000  2.77    79.32   0.000663  1.05  0.000235
  0.7  50  5000  3.22    92.33   0.000669  1.05  0.000238
  0.8  50  5000  4.92    173.19  0.000673  1.04  0.000231
  0.9  50  5000  10.98   384.13  0.000738  1.02  0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
  0.1  50  3055  99.75   890.40  0.001095  0.96  0.000432
  0.2  50  5000  84.90   469.23  0.000709  1.04  0.000224
  0.3  50  5000  74.43   275.54  0.000686  1.05  0.000230
  0.4  50  5000  73.03   168.91  0.000668  1.06  0.000237
  0.5  50  5000  69.88   99.92   0.000663  1.07  0.000235
  0.6  50  5000  67.34   61.42   0.000662  1.07  0.000236
  0.7  50  5000  53.43   69.02   0.000661  1.07  0.000235
  0.8  50  5000  70.06   132.27  0.000676  1.06  0.000236
  0.9  50  5000  79.58   319.36  0.000734  1.03  0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
  0.1  50  2943  33.52   886.90  0.001144  0.95  0.000482
  0.2  50  4995  19.33   488.60  0.000724  1.02  0.000219
  0.3  50  4998  12.43   295.69  0.000688  1.04  0.000231
  0.4  50  5000  8.57    190.20  0.000667  1.04  0.000238
  0.5  50  4999  6.02    120.88  0.000665  1.05  0.000240
  0.6  50  5000  4.44    78.89   0.000669  1.04  0.000234
  0.7  50  5000  4.98    92.53   0.000668  1.05  0.000234
  0.8  50  5000  7.94    171.98  0.000669  1.04  0.000236
  0.9  50  5000  15.76   383.09  0.000738  1.02  0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
  0.1  50  3122  102.00  888.48  0.001091  0.96  0.000436
  0.2  50  5000  85.68   469.50  0.000716  1.04  0.000222
  0.3  50  5000  77.35   276.03  0.000678  1.05  0.000235
  0.4  50  5000  73.19   167.84  0.000674  1.06  0.000234
  0.5  50  5000  69.62   99.66   0.000665  1.06  0.000238
  0.6  50  5000  67.83   61.33   0.000660  1.07  0.000237
  0.7  50  5000  64.19   69.02   0.000666  1.07  0.000237
  0.8  50  5000  61.46   133.45  0.000673  1.06  0.000234
  0.9  50  5000  71.62   319.68  0.000731  1.03  0.000213

Figure 4: Experimental results of the runtime and fitness value in Table 5 (small question bank): runtime and fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA across weight constraints α from 0.1 to 0.9.


where z is the number of experimental runs, ȳ is the average fitness of all runs, and y_i is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms: if its value is low, then the generated tests of each run do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or the balance between the two. We can select α to suit what we require, emphasizing either the test duration or the test difficulty.

5.3. Experimental Results. The experiments are executed with the parameters following Ridge and Kudenko [36] in Table 4, and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract 100 candidate tests. The average runtime for extracting tests is the average runtime of all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of all fitness values of the 5000 generated tests. The average duplicate indicates the average percentage of duplicate questions among the 100 generated tests over all 50 runs; it is also used to indicate the diversity of tests. The lower the value, the more diverse the tests.

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness value, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the requirement for test difficulty is harder than satisfying the requirement for total test time. The experimental results also show that integrating SA gives a better fitness value than not using SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests in a single run, each containing 30 questions, then some generated tests must contain questions that also appear in other generated tests.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to get generated tests that better fit the test requirements. The reason is that the generated tests we obtain in the first run are likely close to the requirements (due to the low fitness value), and the chance of obtaining generated tests that fit the requirements less well is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multichoice tests. The quality of the multichoice questions is good (diversity of the difficulty levels of the questions and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating multiple multiobjective k tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

Figure 5: Experimental results of the runtime and fitness value in Table 6 (large question bank): runtime and fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA across weight constraints α from 0.1 to 0.9.


MMPSO, based on various criteria such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with a focus on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S P Kapse and S Krishnapillai ldquoAn improved multi-ob-jective particle swarm optimization algorithm based onadaptive local searchrdquo International Journal of Applied Evo-lutionary Computation vol 8 no 2 pp 1ndash29 2017

[25] K Deb A Pratap S Agarwal and T Meyarivan ldquoA fast andelitist multiobjective genetic algorithm NSGA-IIrdquo IEEETransactions on Evolutionary Computation vol 6 no 2pp 182ndash197 2002

[26] Z H Zhan J Li and J Cao ldquoMultiple populations formultiple objectives a coevolutionary technique for solvingmultiobjective optimization problemsrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 445ndash463 2013

[27] N ShiZhang and K K Mishra ldquoImproved multi-objectiveparticle swarm optimization algorithm for optimizing wa-termark strength in color image watermarkingrdquo AppliedIntelligence vol 47 no 2 pp 362ndash381 2017

14 Scientific Programming

[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.


Table 1: Summary of different application domains in which PSO algorithms have been applied for different purposes. (For each algorithm, the original table also marks the types of optimization problems addressed: unconstrained or constrained, continuous or discrete, single-objective or multiobjective, and single-swarm or multiswarm.)

Category: Academia and scientometrics

[22] Particle swarm optimization based on multiobjective functions with uniform design (MOPSO-UD): multiobjective PSO with uniform design generates the initial population instead of traditional random methods.

[23] R2-based multi/many-objective particle swarm optimization: the proposed approach used R2 as an indicator to navigate swarms through the search space in multiobjective PSO.

[21] An improved version of circular crowded sorting combined with multiobjective PSO: the individuals of the initial populations are spread across the search space to be better at gathering the Pareto frontier.

[24] An adaptive local search method for multiobjective PSO: an adaptive local search method for multiobjective PSO using the time variance search space index to improve the diversity of solutions and convergence.

[20] Combining utopia point-guided search with multiobjective PSO: a strategy that selects the best individuals that are located near the utopia points.

[19] A novel MOPSO with enhanced local search ability and parameter-less sharing: the proposed approach estimates the density of the particles' neighborhood in the search space. Initially, the proposed method accurately determines the crowding factor of the solutions; in later stages, it effectively guides the entire swarm to converge close to the true Pareto front.

[28] Chaotic particle swarm optimization: the work improves the diversity of the population and uses simplified mesh reduction and gene exchange to improve the performance of the algorithm.

[32] A coevolutionary technique based on multiswarm particle swarm optimization: the authors combined their proposed algorithm with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed.

[31] Particle swarm optimization algorithm based on dynamic boundary search for constrained optimization: the authors proposed a strategy based on dynamic search boundaries to help escape the local optima.

[30] A new PSO-based algorithm (FC-MOPSO): the FC-MOPSO algorithm can work on a mix of constrained, unconstrained, continuous, and/or discrete single-objective and multiobjective optimization problems.

[15] A novel particle swarm optimization algorithm with multiple learning strategies (PSO-MLS): the authors proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods, such as the periodically stochastic learning strategy or the random mutation learning strategy.

[16] Cellular Learning Automata (CLA) for multiswarm PSO: each swarm is placed on a cell of the CLA, and each particle's velocity is affected by some other particles. The connected particles are adjusted over time via periods of learning.

[17] Improved particle swarm optimization algorithm based on dynamical topology and purposeful detecting: the strategies balance the search capabilities between swarms. The extensive experimental results illustrate the effectiveness and efficiency of the three proposed strategies used in MSPSO.

[29] Particle swarm optimization with differential evolution (DE) strategy: the purpose is to achieve high-performance multiobjective optimization.

[26] Coevolutionary multiswarm PSO: the velocity update equation is modified to increase search information and the diversity of solutions so as to avoid a local Pareto front. The results show superior performance in solving optimization problems.

Category: Application (artificial intelligence)

[18] Particle swarm optimization with local search: the authors proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.

[27] Improved particle swarm optimization based on a new leader selection strategy: the algorithm used triangular distance to select leader individuals that cover different regions of the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSOtridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images.

[13, 14] Discrete PSO: for solving the problem of transmitting information on networks. The results prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

Category: Application (multichoice question test extraction)

[2] Novel approach of particle swarm optimization (PSO): a dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks.

[3] Particle swarm optimization (PSO): the authors used particle swarm optimization to generate tests with difficulties approximating the levels required by users. The experimental results show that PSO gives the best performance concerning most of the criteria.

[33] Multiswarm single-objective particle swarm optimization: the authors use particle swarm optimization to generate multiple tests with difficulties approximating the levels required by users. In the parallel stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions.


3. Problem Statement

3.1. Problem of Generating Multiple Tests. In our previous works [3, 33], we proposed a PSO-based method for multichoice test generation; however, it was a single-objective approach. In this paper, we introduce a multiobjective approach to multichoice test generation by combining the PSO and SA algorithms.

Let Q = {q1, q2, q3, . . ., qn} be a question bank with n questions. Each question qi ∈ Q contains four attributes (QC, SC, QD, OD). QC is a question identifier code and is used to avoid duplication of any question in the solution. SC denotes the section code of the question and indicates which section the question belongs to. QD denotes the time limit of the question, and OD denotes a real value in the range [0.1, 0.9] that represents the objective difficulty (level) of the question. QC, SC, and QD are discrete positive integer values, as in the work of Bui et al. [3] and Nguyen et al. [33].

The problem of generating multiple k tests (or just multiple tests) is to generate k tests simultaneously in a single run; i.e., our objective is to generate a set of tests in which each test Ei = {qi1, qi2, qi3, . . ., qim} (qij ∈ Q, 1 ≤ j ≤ m, 1 ≤ i ≤ k, k ≤ n) consists of m (m ≤ n) questions. Additionally, those tests must satisfy both the requirement on objective difficulty ODR and the testing time duration TR given by users. For example, ODR = 0.8 and TR = 45 minutes mean that all the generated tests must have a level of difficulty approximately equal to 0.8 and a test time equal to 45 minutes.

The objective difficulty of a test Ei is defined as OD_Ei = (∑_{j=1}^{m} q_ij·OD)/m, and the duration of the test Ei is determined by T_Ei = ∑_{j=1}^{m} q_ij·QD.

Besides the aforementioned requirements, there are additional constraints that each generated test must satisfy, as follows:

C1: each question in a generated test must be unique (i.e., a question cannot appear more than once in a test).

C2: in order to make the test more diverse, there must be no case in which all questions in a test have the same difficulty value as the required objective difficulty ODR. For example, if ODR = 0.6, then ∃ q_ki ∈ E_k such that q_ki·OD ≠ 0.6.

C3: some questions in a question bank must stay in the same groups because their content is related to each other. The generated tests must ensure that all the questions in one group appear together. This means that if a question of a specific group appears in a test, the remaining questions of the group must also be present in the same test [3, 33].

C4: as users may require generated tests to have several sections, a generated test must ensure that the required number of questions is drawn out of the question bank for each section.

3.2. Modeling MMPSO for the Problem of Generating Multiple Tests. The model of MMPSO for the problem of generating multiple tests can be represented as follows:

    min f1(x1, x2, x3, . . ., xm)
    min f2(x1, x2, x3, . . ., xm)
    . . .
    min fk(x1, x2, x3, . . ., xm),     (2)

where f1, f2, . . ., fk are swarms that represent the multiple tests, and x1, . . ., xm are the questions in a test.

Assume that F is an objective function for the multiobjective version of the problem; it can be formulated as follows:

    F = ∑ αi·Fi,     (3)

where αi is a weight constraint (αi ∈ (0, 1]) with ∑ αi = 1, and Fi is a single-objective function. In this paper, we evaluate two functions: the average difficulty requirement of the tests, F1, and the total test duration, F2.

F1 = f(q_kj·OD) = |(∑_{j=1}^{m} q_kj·OD)/m - ODR|, where F1 satisfies the conditions C1, C2, C3, and C4; m is the total number of questions in the test; q_kj·OD is the difficulty value of each question; and ODR is the required difficulty level.

F2 = f(q_kj·QD) = |1 - (∑_{j=1}^{m} q_kj·QD)/TR|, where F2 satisfies the conditions C1, C2, C3, and C4; m is the total number of questions in the test; q_kj·QD is the duration of each question; and TR is the required total time of the tests.

The objective function F is used as the fitness function in the MMPSO, and the value of the objective function is considered the fitness of the resulting test. In this case, the smaller F becomes, the better the fitness. To improve the quality of the test, we also take into account the constraints C1, C2, C3, and C4.

For example, provided that we have a question bank as in Table 2 and the test extraction requirements are four questions, a difficulty level of 0.6 (ODR = 0.6), a total test duration of 300 seconds (TR = 300), and a weight constraint α = 0.4, Table 3 illustrates a candidate solution with its fitness of 0.14 computed by using formula (3).

4. MMPSO in Extracting Multiple Tests

4.1. Process of MMPSO for Extracting Tests. This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests, based on the idea in Bui et al. [3]. It can be described as follows. Creating an initial swarm population is the first step in PSO, in which each particle in a swarm is considered a candidate test; this first population also affects the speed of convergence to optimal solutions. This step randomly picks questions from a question bank. The questions, either stand-alone or staying in groups (constraint C3), are drawn out for one section (constraint C4) until the required number of questions of the section is reached, and the drawing process is repeated for the next sections. When the required number of questions of the candidate test and all the constraints are met, the fitness value of the generated test is computed according to formula (3).

The Gbest and Pbest position information is the contained questions. All Pbest slowly move towards Gbest by using the location information of Gbest. The movement is the replacement of some questions in the candidate test according to the velocity of Pbest. If the fitness value of a newly found Pbest of a particle is smaller than the particle's currently best-known Pbest (i.e., the new position is better than the old), then we assign the newly found position value to Pbest.

Gbest moves towards the final optimal solution in random directions. The movement is achieved by replacing its content with some random questions from the question bank. In a similar way to Pbest, if the new position is no better than the old one, the Gbest value is not updated.

The algorithm ends when the fitness value is lower than the fitness threshold ε or when the number of movements (iteration loops) surpasses the loop threshold λ. Both thresholds are given by users.

4.2. Migration Parallel MMPSO for Extracting Tests (Parallel MMPSO). Based on the idea in Nguyen et al. [33], we present the migration parallel approach of MMPSO for increasing performance. Each swarm now corresponds to a thread, and migration happens by chance between swarms. The migration method starts with locking the current thread (swarm) to avoid interference from other threads during the migration procedure.

In the dual-sector model [34], Lewis describes a relationship between two regions: the subsistence sector and the capitalist sector. We can view the two types of economic sectors here as the strong (capitalist) sectors and the weak (subsistence) sectors (while ignoring other related aspects of the economy). Whether a sector is strong or weak depends on the fitness value of the Gbest position of its swarm. However, when applying those theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The direction of migration changes when individuals with strong Pbest (strong individuals) in strong sectors move to weak sectors. The weak sectors' Gbest may be replaced by the incoming Pbest, and the fitness value of the weak swarms should make a large lift, as in the work of Nguyen et al. [33].

Backward migration from the weak swarms to the strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This is to ensure that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there are changes in the fitness value of the current Gbest compared to the previous Gbest.

The probability of migration is denoted as γ, and its unit is a percentage (%).

The number of migrating particles is equal to δ times the size of the swarm (i.e., the number of existing particles in the swarm), where δ denotes the percentage of migration.

The migration parallel MMPSO-based approach to extract multiple tests is described in the form of pseudocode in Algorithm 1.

The particle updates its velocity (V) and position (P) with the following formulas:

    V^(t+1) = V^t + r1 × V^t_pbest + r2 × V^t_gbest,     (4)

where V_pbest is the velocity of Pbest, determined by V_pbest = α × m; V_gbest is the velocity of Gbest, determined by V_gbest = β × m; α, β ∈ (0, 1); r1 and r2 are random values; and m is the number of questions in the test solutions.

    P^(t+1) = P^t + V^(t+1).     (5)
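Since a position here is a set of questions rather than a point in a continuous space, the velocity of equation (4) effectively decides how many questions of a particle are replaced by questions taken from Gbest. The sketch below is one possible discrete reading; the function name, the rounding rule, and the replacement policy are our own illustrative choices, not the authors' implementation:

```python
import random

def move_towards(particle, gbest, alpha=0.05, beta=0.05, rng=random):
    """Hedged reading of equations (4)-(5): compute a scalar velocity and
    replace that many questions of `particle` with questions from `gbest`."""
    m = len(particle)
    # Equation (4): V = r1 * (alpha * m) + r2 * (beta * m)
    v = rng.random() * alpha * m + rng.random() * beta * m
    k = max(1, round(v))  # replace at least one question per move
    donors = [q for q in gbest if q not in particle]  # avoid duplicates (C1)
    for _ in range(min(k, len(donors))):
        i = rng.randrange(m)          # equation (5): apply the move in place
        particle[i] = donors.pop()
    return particle
```

With α = β = 0.05 and m = 100 (Table 4's test size), a single move replaces roughly 1 to 10 questions, so particles drift towards Gbest without collapsing onto it.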

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying the tests. This stage happens when there is a change in the value of Gbest of a swarm (for each thread) in the first stage. In this second stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions, based on the work of Nguyen et al. [33]. The complete flowchart that applies the parallelized migration method to the MMPSO algorithm is shown in Figure 1.

4.3. Migration Parallel MMPSO in Combination with Simulated Annealing. As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of a set of initial solutions (population) is generally performed randomly in PSO. This is one of its drawbacks: since the search space is very wide, the probability of getting stuck in a local optimum solution is

Table 3: An example of test results.

An individual that satisfies the requirement (ODR = 0.6, TR = 300, α = 0.4):
QC: 05, 08, 01, 04
OD: 0.4, 0.8, 0.3, 0.7
QD: 65, 35, 100, 40
Fitness (formula (3)): (0.4 × |(2.2/4) - 0.6|) + [(1 - 0.4) × |1 - (240/300)|] = 0.14

Table 2: The question bank.

QC: 01, 02, 03, 04, 05, 06, 07, 08, 09, 10
OD: 0.3, 0.2, 0.8, 0.7, 0.4, 0.6, 0.5, 0.8, 0.2, 0.3
QD: 100, 110, 35, 40, 65, 60, 60, 35, 110, 100
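The worked example for formula (3) can be reproduced in a few lines. The function below is a direct transcription of F = α·F1 + (1 − α)·F2 with F1 and F2 as defined in Section 3.2 (variable names are ours); the data are the candidate from Table 3:

```python
def fitness(od_vals, qd_vals, od_r, t_r, alpha):
    """Formula (3) with two objectives: difficulty (F1) and duration (F2)."""
    m = len(od_vals)
    f1 = abs(sum(od_vals) / m - od_r)    # F1: distance from required difficulty
    f2 = abs(1 - sum(qd_vals) / t_r)     # F2: distance from required duration
    return alpha * f1 + (1 - alpha) * f2

# Candidate test from Table 3 (questions QC = 05, 08, 01, 04)
od = [0.4, 0.8, 0.3, 0.7]   # difficulties, sum = 2.2
qd = [65, 35, 100, 40]      # durations in seconds, sum = 240
f = fitness(od, qd, od_r=0.6, t_r=300, alpha=0.4)
# 0.4 * |2.2/4 - 0.6| + 0.6 * |1 - 240/300| = 0.4 * 0.05 + 0.6 * 0.2 = 0.14
```

A perfect test (average difficulty exactly ODR and total duration exactly TR) would score 0, which is why the stop condition in Table 4 uses a small fitness tolerance rather than zero.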


also high. In order to improve the initial population, we apply SA in the initial population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optimums, as in Kharrat and Neji [35]. In this study, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest (which is commonly used in PSO). The MMPSO with SA is described by the pseudocode in Algorithm 2.

5. Experimental Studies

5.1. Experimental Environment and Data. Bui et al. [3] evaluated different algorithms, such as the random method, genetic algorithms, and a PSO-based algorithm, for extracting tests from question banks of varying sizes. The results of that experiment showed that the PSO-based algorithms are better than the others. Hence, the experiment in this paper only evaluates and compares the improved SA parallel MMPSO algorithm with the normal parallel MMPSO algorithm in terms of the diversity of tests and the accuracy of solutions.

All proposed algorithms are implemented in C# and run on 2 computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include 2 question banks: one with 998 different questions (the small question bank) and the other with 12,000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12,000 different questions, in which each part has 1,000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) with an evaluation of the two functions: the average difficulty requirement of the tests, F1, and the total test duration, F2.

5.2. Evaluation Method. In this part, we present the formula used to evaluate the stability of all algorithms in producing the required tests under various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

    A = sqrt( ∑_{i=1}^{z} (y_i - ȳ)² / (z - 1) ),     (6)
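Equation (6) is the sample standard deviation of the per-run results y_1, . . ., y_z. A direct transcription (function and variable names are ours):

```python
import math

def stability(values):
    """Sample standard deviation of z observations, equation (6)."""
    z = len(values)
    mean = sum(values) / z
    return math.sqrt(sum((y - mean) ** 2 for y in values) / (z - 1))
```

A smaller A means the algorithm produces tests of similar quality from run to run, which is how the "standard deviation" columns of Tables 5 and 6 should be read.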

Function Migration_Par_MMPSO: Extracting tests using migration parallel MMPSO
  For each available thread t do
    Generate the initial population with random questions
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest do
        Move Pbest towards Gbest using location information of Gbest
        Update velocity using equation (4)
        Update position using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Function Migration_MMPSO: Improving solutions with the migration method
  Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference from other threads during the migration procedure
  Select λ, the set of stronger individuals for migration, except for the Gbest
  Lock the other threads so that no unintended changes happen to them during the migration
  Choose a thread that has a Gbest weaker than the one in the current thread
  Unlock the other threads except for the chosen thread
  Set the status of the chosen thread to "Exchanging"
  Move the λ selected individuals to the chosen thread
  Remove those λ selected individuals from the current thread
  Select the λ weakest individuals in the chosen thread
  Add those λ weakest individuals to the current thread
  Set the status of the chosen thread to "Available"
  Unlock the current thread and the chosen thread

ALGORITHM 1: Pseudocode of migration parallel MMPSO.
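The exchange step of Algorithm 1 can be condensed into a single function. The sketch below is our own simplification: threads and locks are elided, and a swarm is modeled as a list of (fitness, particle) pairs kept sorted best-first, with smaller fitness being better:

```python
def migrate(strong, weak, lam):
    """Swap the lam best particles of `strong` (excluding its Gbest at index 0)
    with the lam weakest particles of `weak`, preserving both swarm sizes."""
    movers = strong[1:1 + lam]      # strongest individuals, Gbest excluded
    returners = weak[-lam:]         # weakest individuals of the weak swarm
    strong[1:1 + lam] = returners   # backward migration keeps sizes equal
    weak[-lam:] = movers            # forward migration of strong individuals
    strong.sort(key=lambda p: p[0])
    weak.sort(key=lambda p: p[0])
    return strong, weak
```

After the swap, an incoming strong individual may become the weak swarm's new Gbest, which is the fitness "lift" described in Section 4.2.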


Figure 1: The flowchart of the MMPSO algorithm in migration parallel. [Flowchart summary: after the input of extraction requirements, threads 1 through k each run a single PSO combined with SA; each thread uses SA to generate its initial population, evaluates F = ∑ αiFi, selects Gbest and Pbest, moves Pbest towards Gbest and Gbest towards the goal, migrates when the migration condition is met, and checks the stopping conditions before returning the solution.]

Function Migration_Par_MMPSO_SA: Extracting tests using migration parallel MMPSO and SA
  For each available thread t do
    Generate the initial population by using SA, as it is capable of escaping local minimums. In this stage, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest, and any incorrect solutions are removed
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest do
        Move Pbest towards Gbest using location information of Gbest
        Apply velocity update using equation (4)
        Apply position update using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

ALGORITHM 2: Pseudocode of migration parallel MMPSO with SA.
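The SA seeding in Algorithm 2 can be sketched as a standard annealing loop using the cooling parameters from Table 4 (initial temperature 100, cooling rate 0.9, termination temperature 0.01, 100 iterations per temperature). The neighbour move and the `energy` callback below are our own simplifications, and duplicate-question handling (constraint C1) is omitted for brevity:

```python
import math
import random

def sa_initial_particle(bank, m, energy, t0=100.0, cool=0.9, t_min=0.01,
                        iters=100, rng=random):
    """Simulated-annealing seed for one particle: start from a random draw of
    m questions and accept worse neighbours with probability exp(-delta/T)."""
    current = rng.sample(bank, m)
    best, t = list(current), t0
    while t > t_min:
        for _ in range(iters):
            cand = list(current)
            cand[rng.randrange(m)] = rng.choice(bank)  # neighbour: swap one question
            delta = energy(cand) - energy(current)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current = cand
                if energy(current) < energy(best):
                    best = list(current)
        t *= cool
    return best
```

In the full algorithm, `energy` would be the fitness of formula (3), so the seeded particles already start close to the required difficulty and duration instead of at random points of the search space.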


Figure 2: The allocation of the difficulty level and time of questions in the small question bank. [Scatter plot over question IDs 1 to 1000: difficulty level of each question on a 0 to 1 scale and time of each question on a 0 to 140 second scale.]

Figure 3: The allocation of the difficulty level and time of questions in the large question bank. [Scatter plot over question IDs 1 to 12000: difficulty level of each question on a 0 to 1 scale and time of each question on a 0 to 140 second scale.]

Table 4: Experimental parameters.

The required level of difficulty (ODR): 0.5
The total test time (TR): 5400 (seconds)
The value of α: [0.1, 0.9]
The number of required questions in a test: 100
The number of questions in each section in the test: 10
The number of simultaneously generated tests in each run: 100
The number of questions in the bank: 1000 and 12000

The PSO parameters:
  The number of particles in each swarm: 10
  Random values r1, r2: in [0, 1]
  The percentage of Pbest individuals which receive position information from Gbest (C1): 5
  The percentage of Gbest which moves to final goals (C2): 5
  The percentage of migration δ: 10
  The percentage of migration probability γ: 5
  The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000

The SA parameters:
  Initial temperature: 100
  Cooling rate: 0.9
  Termination temperature: 0.01
  Number of iterations: 100

Table 5: Experimental results in the small question bank.

Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (seconds) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11   | 61.3658  | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445  | 47.9793  | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425  | 35.8007  | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530  | 30.5070  | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774  | 29.5425  | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973  | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059  | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581  | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975  | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4    | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621  | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441  | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127  | 76.75  | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747  | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690  | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575  | 51.3890  | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578  | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135  | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524  | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976  | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573  | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467  | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420  | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735  | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816  | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966  | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536  | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098  | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776  | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981  | 84.57  | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094  | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685  | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank. Columns: weight constraint (α); number of runs; successful times; average runtime for extracting tests (second); average number of iteration loops; average fitness; average duplicate (%); standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):

0.1 | 50 | 2931 | 23.50 | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 14.05 | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 9.56 | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 5.99 | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 3.61 | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 2.77 | 79.32 | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 3.22 | 92.33 | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 4.92 | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 10.98 | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 99.75 | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 84.90 | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 74.43 | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 73.03 | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 69.88 | 99.92 | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 67.34 | 61.42 | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 53.43 | 69.02 | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 70.06 | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 79.58 | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 33.52 | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 19.33 | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 12.43 | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 8.57 | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 6.02 | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 4.44 | 78.89 | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 4.98 | 92.53 | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 7.94 | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 15.76 | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 102.00 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 85.68 | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 77.35 | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 73.19 | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 69.62 | 99.66 | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 67.83 | 61.33 | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 64.19 | 69.02 | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 61.46 | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 71.62 | 319.68 | 0.000731 | 1.03 | 0.000213

Figure 4: Experimental results of the runtime (s) and fitness value versus the weight constraint (α) for the small question bank (data in Table 5), comparing parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA.


where z is the number of experimental runs, ȳ is the average fitness of all runs, and yi is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms. If its value is low, then the generated tests of each run do not differ much in their fitness values. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or the balance between the two. We can select α to suit what we require, emphasizing either the test duration or the test difficulty.

5.3. Experimental Results. The experiments are executed with the parameters following Ridge and Kudenko [36] in Table 4, and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract 100 candidate tests. The average runtime for extracting tests is the average runtime of all 50 experimental runs. The average number of iteration loops is the average of all required loops of all 50 runs. The average fitness is the average of all fitness values of the 5000 generated tests. The average duplicate indicates the average number of duplicate questions among the 100 generated tests of all 50 runs; it is also used to indicate the diversity of tests. The lower the value, the more diverse the tests.
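One plausible reading of the average-duplicate metric, the mean number of shared questions over all pairs of generated tests, can be sketched as follows (the exact pairing scheme is our assumption, not spelled out in the text):

```python
from itertools import combinations

def average_duplicate(tests):
    """Average number of shared questions over all pairs of generated tests.

    `tests` is a list of question-ID sets, one per generated test. This is one
    plausible reading of the paper's "average duplicate" metric.
    """
    pairs = list(combinations(tests, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) for a, b in pairs) / len(pairs)
```

A smaller value then indicates a more diverse batch of tests, matching how the column is interpreted above.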

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the requirement for the test difficulty is harder than satisfying the requirement for the total test time. The experiment results also show that integrating SA gives a better fitness value than not using SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The duplicate question proportions between generated tests depend on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests in a single run, each containing 30 questions, then some generated tests must contain questions similar to those of other generated tests.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to get generated tests that better fit the test requirements. The reason is that the generated tests we obtain in the first run are likely close to the requirements (due to the low fitness value), and the chance that we obtain generated tests that fit the requirements poorly is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers through a question bank is an important activity in extracting multichoice tests. The quality of multichoice questions is good (diversity of the levels of difficulty of the questions and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating multiobjective multiple k tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

Figure 5: Experimental results of the runtime (s) and fitness value versus the weight constraint (α) for the large question bank (data in Table 6), comparing parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA.


MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, focusing on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.


[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.



Table 1: Continued. Columns: Categories; References; Algorithm; Types of optimization problems (modern optimization techniques: unconstrained, constrained, continuous, discrete, single-objective, multiobjective, single-swarm, multiswarm); Remarks.

Application (artificial intelligence):
[18] Particle swarm optimization with local search. The authors proposed a strategy to improve the speed of convergence of multiswarm PSO for robots' movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets.
[27] Improved particle swarm optimization based on a new leader selection strategy. The algorithm used triangular distance to select leader individuals that cover different regions in the Pareto frontier. The authors also included an update strategy for Pbest with respect to their connected leaders. MOPSOtridist was shown to outperform other multiobjective PSOs, and the authors illustrated the algorithm's application with the digital watermarking problem for RGB images.
[13, 14] Discrete PSO. For solving the problem of transmitting information on networks. The results of the work prove that the proposed discrete PSO outperforms Simulated Annealing (SA).

Application (multichoice question test extraction):
[2] Novel approach of particle swarm optimization (PSO). The dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks.
[3] Particle swarm optimization (PSO). The authors used particle swarm optimization to generate tests with difficulties approximating the levels required by users. The experiment results show that PSO gives the best performance concerning most of the criteria.
[33] Multiswarm single-objective particle swarm optimization. The authors use particle swarm optimization to generate multiple tests with difficulties approximating the levels required by users. In the parallel stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversities of solutions.


3. Problem Statement

3.1. Problem of Generating Multiple Tests. In our previous works [3, 33], we proposed a PSO-based method for multichoice test generation; however, it was a single-objective approach. In this paper, we introduce a multiobjective approach to multichoice test generation by combining the PSO and SA algorithms.

Let Q = {q1, q2, q3, ..., qn} be a question bank with n questions. Each question qi ∈ Q contains four attributes {QC, SC, QD, OD}. QC is a question identifier code, used to avoid duplication of any question in the solution. SC denotes a section code of the question, indicating which section the question belongs to. QD denotes the time limit of the question, and OD denotes a real value in the range [0.1, 0.9] that represents the objective difficulty (level) of the question. QC, SC, and QD are discrete positive integer values, as in the work of Bui et al. [3] and Nguyen et al. [33].

The problem of generating multiple k tests (or just multiple tests) is to generate k tests simultaneously in a single run; i.e., our objective is to generate a set of tests in which each test Ei = {qi1, qi2, qi3, ..., qim} (qij ∈ Q, 1 ≤ j ≤ m, 1 ≤ i ≤ k, k ≤ n) consists of m (m ≤ n) questions. Additionally, those tests must satisfy both the requirements of objective difficulty ODR and testing time duration TR given by users. For example, ODR = 0.8 and TR = 45 minutes mean that all the generated tests must have a level of difficulty approximately equal to 0.8 and a test time equal to 45 minutes.

The objective difficulty of a test Ei is defined as OD_Ei = (Σ_{j=1}^{m} qij·OD)/m, and the duration of the test Ei is determined by T_Ei = Σ_{j=1}^{m} qij·QD.

Besides the aforementioned requirements, there are additional constraints that each generated test must satisfy, as follows:

C1: each question in a generated test must be unique (i.e., a question cannot appear more than once in a test).
C2: in order to make the test more diverse, there exists no case in which all questions in a test have the same difficulty value as the required objective difficulty ODR. For example, if ODR = 0.6, then ∃ qki ∈ TEk such that qki·OD ≠ 0.6.
C3: some questions in a question bank must stay in the same groups because their content relates to each other. The generated tests must ensure that all the questions in one group appear together. This means that if a question of a specific group appears in a test, the remaining questions of the group must also be present in the same test [3, 33].
C4: as users may require generated tests to have several sections, a generated test must ensure that the required numbers of questions are drawn out from question banks for each section.
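As an illustration, the four constraints can be checked mechanically. The sketch below is our own encoding of C1–C4; the data layout and names are assumptions, not from the paper:

```python
def satisfies_constraints(test, bank, groups, section_quota, od_r):
    """Check constraints C1-C4 for one candidate test.

    `test` is a list of question IDs; `bank` maps ID -> (SC, QD, OD);
    `groups` is a list of ID sets whose members must appear together (C3);
    `section_quota` maps section code SC -> required question count (C4).
    """
    # C1: no question may appear more than once.
    if len(test) != len(set(test)):
        return False
    # C2: not every question may have difficulty exactly equal to ODR.
    if all(bank[q][2] == od_r for q in test):
        return False
    # C3: grouped questions appear all together or not at all.
    chosen = set(test)
    for g in groups:
        if chosen & g and not g <= chosen:
            return False
    # C4: each section contributes exactly its required number of questions.
    counts = {}
    for q in test:
        sc = bank[q][0]
        counts[sc] = counts.get(sc, 0) + 1
    return counts == section_quota
```

Candidates failing any of these checks are rejected during population creation.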

3.2. Modeling MMPSO for the Problem of Generating Multiple Tests. The model of MMPSO for the problem of generating multiple tests can be represented as follows:

min f1(x1, x2, x3, ..., xm),
min f2(x1, x2, x3, ..., xm),
⋮
min fk(x1, x2, x3, ..., xm),
(2)

where f1, f2, ..., fk are swarms that represent the multiple tests, and x1, ..., xm are the questions in a test (m being the number of questions).

Assume that F is an objective function for the multiobjective version of the problem; it can be formulated as follows:

F = Σ αiFi, (3)

where αi is a weight constraint (αi ∈ (0, 1]) with Σ αi = 1, and Fi is a single-objective function. In this paper, we use an evaluation of two functions: the average level of difficulty requirement of the tests, F1, and the total test duration, F2.

F1 = f(qkj·OD) = |(Σ_{j=1}^{m} qkj·OD)/m − ODR|, where F1 satisfies the conditions C1, C2, C3, and C4; m is the total number of questions in the test; qkj·OD is the difficulty value of each question; and ODR is the required difficulty level.

F2 = f(qkj·QD) = |1 − (Σ_{j=1}^{m} qkj·QD)/TR|, where F2 satisfies the conditions C1, C2, C3, and C4; m is the total number of questions in the test; qkj·QD is the duration of each question; and TR is the required total time of tests.

The objective function F is used as the fitness function in the MMPSO, and the result of the objective function is considered the fitness of the resulting test.

In this case, the better the fitness, the smaller F becomes. To improve the quality of the test, we also take into account the constraints C1, C2, C3, and C4.

For example, provided that we have a question bank as in Table 2 and the test extraction requirements are four questions, a difficulty level of 0.6 (ODR = 0.6), a total test duration of 300 seconds (TR = 300), and a weight constraint α = 0.4, Table 3 illustrates a candidate solution with its fitness of 0.14 computed by using formula (3).
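The worked example above can be checked with a short sketch of formula (3), with F1 and F2 as defined earlier (the function and parameter names are ours):

```python
def fitness(ods, qds, od_r, t_r, alpha):
    """Weighted fitness F = alpha*F1 + (1 - alpha)*F2 from formula (3).

    `ods`/`qds` hold the difficulty and duration values of the questions in
    one candidate test; F1 and F2 follow the definitions in Section 3.2.
    """
    m = len(ods)
    f1 = abs(sum(ods) / m - od_r)   # deviation from the required difficulty
    f2 = abs(1 - sum(qds) / t_r)    # deviation from the required duration
    return alpha * f1 + (1 - alpha) * f2

# The candidate of Table 3: OD = {0.4, 0.8, 0.3, 0.7}, QD = {65, 35, 100, 40}
f = fitness([0.4, 0.8, 0.3, 0.7], [65, 35, 100, 40], od_r=0.6, t_r=300, alpha=0.4)
# 0.4*|0.55 - 0.6| + 0.6*|1 - 240/300| = 0.02 + 0.12 = 0.14
```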

4. MMPSO in Extracting Multiple Tests

4.1. Process of MMPSO for Extracting Tests. This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests, based on the idea in Bui et al. [3]. It can be described as follows. Creating an initial swarm population is the first step in PSO, in which each particle in a swarm is considered a candidate test; this first population also affects the speed of convergence to optimal solutions. This step randomly picks questions from a question bank. The questions, either stand-alone or staying in groups (constraint C3), are drawn out for one section (constraint C4) until the required number of questions of the section is reached, and the drawing process is repeated for the next sections. When the required number of questions of the candidate test and all the constraints are met, the fitness value of the generated test is computed according to formula (3).

The Gbest and Pbest position information is the contained questions. All Pbest slowly move towards Gbest by using the


location information of Gbest. The movement is the replacement of some questions in the candidate test according to the velocity Pbest. If the fitness value of a newly found Pbest of a particle is smaller than the particle's currently best-known Pbest (i.e., the new position is better than the old one), then we assign the newly found position value to Pbest.

Gbest moves towards the final optimal solution in random directions. The movement is achieved by replacing its content with some random questions from the question bank. In a similar way to Pbest, if the new position is no better than the old one, the Gbest value is not updated.

The algorithm ends when the fitness value is lower than the fitness threshold ε or the number of movements (iteration loops) surpasses the loop threshold λ. Both thresholds are given by users.

4.2. Migration Parallel MMPSO for Extracting Tests (Parallel MMPSO). Based on the idea in Nguyen et al. [33], we present the migration parallel approach of MMPSO for increasing performance. Each swarm corresponds to a thread, and migration happens by chance between swarms. The migration method starts with locking the current thread (swarm) to avoid interference from other threads.

In the dual-sector model [34], Lewis describes a relationship between two regions: the subsistence sector and the capitalist sector. We can view the two types of economic sectors here as the strong (capitalist) sectors and the weak (subsistence) sectors (while ignoring other related aspects of the economy). Whether a sector is strong or weak depends on the fitness value of the Gbest position of its swarm. However, when applying those theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The direction of migration changes when individuals with strong Pbest (strong individuals) in strong sectors move to weak sectors. The weak sectors' Gbest may be replaced by the incoming Pbest, and the fitness value of the weak swarms should make a large lift, as in the work of Nguyen et al. [33].

Backward migration from the weak swarms to the strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This is to ensure that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there are changes in the fitness value of the current Gbest compared with the previous Gbest.

The probability of migration is denoted as γ, and its unit is a percentage (%).

The number of migrating particles is equal to δ times the size of the swarm (i.e., the number of existing particles in the swarm), where δ denotes the percentage of migration.

The migration parallel MMPSO-based approach to extract multiple tests is described in the form of a pseudocode in Algorithm 1.

The particle updates its velocity (V) and position (P) with the following formulas:

V^(t+1) = V^t + r1 × V^t_pbest + r2 × V^t_gbest, (4)

where Vpbest is the velocity of Pbest, with Vpbest determined by Vpbest = α × m; Vgbest is the velocity of Gbest, with Vgbest determined by Vgbest = β × m; α, β ∈ (0, 1); r1, r2 are random values; and m is the number of questions in the test solutions.

P^(t+1) = P^t + V^(t+1). (5)
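Because a particle here is a set of questions rather than a numeric vector, equations (4) and (5) are realised as question replacements. The sketch below is our discrete reading of that update, with the replacement count standing in for the velocity term (names and details are illustrative, not from the paper):

```python
import random

def move_towards(test, target, n_swap, rng=random):
    """One discrete position update: replace up to `n_swap` questions of
    `test` so that it shares more questions with `target` (e.g. a Pbest
    moving towards Gbest).

    The velocity of equation (4) is realised as the number of replacements
    (alpha*m or beta*m in the paper's terms); equation (5) is the swap itself.
    """
    test = list(test)
    missing = [q for q in target if q not in test]        # questions to adopt
    replaceable = [q for q in test if q not in target]    # questions to drop
    for _ in range(min(n_swap, len(missing), len(replaceable))):
        out_q = rng.choice(replaceable)
        in_q = missing.pop()
        replaceable.remove(out_q)
        test[test.index(out_q)] = in_q
    return test
```

Gbest's own random walk is the same operation with random bank questions in place of `target`.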

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying the tests. This stage happens when there is a change in the value of Gbest of each swarm (for each thread) in the first stage. In this second stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions, based on the work of Nguyen et al. [33]. The complete flowchart that applies the parallelized migration method to the MMPSO algorithm is shown in Figure 1.

4.3. Migration Parallel MMPSO in Combination with Simulated Annealing. As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of a set of initial solutions (population) is generally performed randomly in PSO. This is one of its drawbacks: since the search space is very wide, the probability of getting stuck in a local optimum solution is

Table 3: An example of test results. An individual that satisfies the requirements (ODR = 0.6, TR = 300, α = 0.4):
QC: 05 08 01 04
OD: 0.4 0.8 0.3 0.7
QD: 65 35 100 40
Fitness: (0.4 × |(2.2/4) − 0.6|) + [(1 − 0.4) × |1 − (240/300)|] = 0.14

Table 2: The question bank.
QC: 01 02 03 04 05 06 07 08 09 10
OD: 0.3 0.2 0.8 0.7 0.4 0.6 0.5 0.8 0.2 0.3
QD: 100 110 35 40 65 60 60 35 110 100


also high. In order to improve the initial population, we apply SA in the initial-population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optimums, as in Kharrat and Neji [35]. In this study, the process of finding new solutions using SA is improved by moving Pbest to Gbest using the received information about the location of Gbest (which is commonly used in PSO). The MMPSO with SA is described by a pseudocode in Algorithm 2.
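A minimal sketch of such an SA-based initialisation step might look as follows; the neighbour move (swapping one question for a random unused one) and the geometric cooling schedule are illustrative assumptions, not fixed by the paper:

```python
import math
import random

def sa_polish(candidate, bank_ids, fitness, t0=1.0, cooling=0.95, steps=200,
              rng=random):
    """Improve one randomly generated candidate test with simulated annealing.

    A worse neighbour is accepted with probability exp(-delta/T), which is
    what lets the search escape local optima before the PSO stage starts.
    """
    current, best = list(candidate), list(candidate)
    t = t0
    for _ in range(steps):
        neighbour = list(current)
        i = rng.randrange(len(neighbour))
        unused = [q for q in bank_ids if q not in neighbour]
        if not unused:
            break
        neighbour[i] = rng.choice(unused)   # swap one question at random
        delta = fitness(neighbour) - fitness(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = neighbour             # accept (possibly worse) move
        if fitness(current) < fitness(best):
            best = list(current)
        t *= cooling                        # geometric cooling
    return best
```

Each polished candidate then seeds one particle of the initial swarm.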

5. Experimental Studies

5.1. Experimental Environment and Data. Bui et al. [3] evaluated different algorithms, such as the random method, genetic algorithms, and a PSO-based algorithm, for extracting tests from question banks of varying sizes. The results of that experiment showed that the PSO-based algorithms are better than the others. Hence, the experiment in this paper only evaluates and compares the improved SA parallel MMPSO algorithm with the normal parallel MMPSO algorithm in terms of the diversity of tests and the accuracy of solutions.

All proposed algorithms are implemented in C and run on two computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include two question banks: one with 998 different questions (the small question bank) and the other with 12,000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12,000 different questions, in which each part has 1,000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) and an evaluation of the two functions: the average level of difficulty requirement of the tests, F1, and the total test duration, F2.

5.2. Evaluation Method. In this part, we present the formula used to evaluate all algorithms regarding their stability in producing the required tests under various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

A = sqrt( Σ_{i=1}^{z} (yi − ȳ)² / (z − 1) ), (6)
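Equation (6) is the ordinary sample standard deviation and can be computed directly (the function name is ours):

```python
import math

def stability(fitness_values):
    """Sample standard deviation of per-run fitness values, equation (6)."""
    z = len(fitness_values)
    mean = sum(fitness_values) / z          # the average fitness over all runs
    return math.sqrt(sum((y - mean) ** 2 for y in fitness_values) / (z - 1))
```

Applied to the per-run fitness values of one algorithm, a smaller result means the runs produce more consistent tests.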

Function Migration_Par_MMPSO: Extracting tests using migration parallel MMPSO
For each available thread t do
  Generate the initial population with random questions
  While stop conditions are not met do
    Gather Gbest and all Pbest
    For each Pbest do
      Move Pbest towards Gbest using location information of Gbest
      Update velocity using equation (4)
      Update position using equation (5)
    End for
    Gbest moves in a random direction to search for the optimal solution
    If the probability for migration c is met then
      Execute function Migration_MMPSO with t
    End if
  End while
End for

Function Migration_MMPSO: Improving solutions with the migration method
  Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference from other threads during the migration procedure
  Select λ, the set of stronger individuals for migration, excluding Gbest
  Lock the other threads so that no unintended changes happen to them during the migration
  Choose a thread that has a Gbest weaker than the one in the current thread
  Unlock the other threads except for the chosen thread
  Set the status of the chosen thread to "Exchanging"
  Move the λ selected individuals to the chosen thread
  Remove those λ selected individuals from the current thread
  Select the λ weakest individuals in the chosen thread
  Add those λ weakest individuals to the current thread
  Set the status of the chosen thread to "Available"
  Unlock the current thread and the chosen thread

ALGORITHM 1: Pseudocode of the migration parallel MMPSO.
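The exchange performed by Migration_MMPSO can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `migrate` function, the list-of-pairs swarm representation, and the sorting policy are our assumptions; fitness is minimized, as in formula (3).

```python
def migrate(strong, weak, delta=0.10):
    """Exchange particles between a strong swarm and a weak swarm.

    Both swarms are lists of (fitness, particle) pairs, lower fitness
    being better. The lam = delta * swarm-size best particles of the
    strong swarm (excluding its Gbest at index 0) move forward to the
    weak swarm, and the lam weakest particles of the weak swarm move
    backward, so both swarm sizes stay unchanged.
    """
    lam = max(1, int(delta * len(strong)))
    strong.sort(key=lambda p: p[0])   # best (smallest fitness) first
    weak.sort(key=lambda p: p[0])
    movers = strong[1:1 + lam]        # strong individuals, Gbest excluded
    weakest = weak[-lam:]             # weakest individuals of the weak swarm
    strong[1:1 + lam] = weakest       # backward migration keeps sizes equal
    weak[-lam:] = movers              # forward migration
    return strong, weak
```

With δ = 10% and swarms of 10 particles (Table 4), a single particle moves in each direction per migration.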

8 Scientific Programming

[Figure 1: The flowchart of the MMPSO algorithm with migration parallelism. Each of the k threads runs a single PSO combined with SA: extraction requirements are input, SA generates the initial population, the fitness F = Σ α_i F_i is evaluated, Gbest and Pbest are selected (each Pbest approaches Gbest, and Gbest approaches the goal), migration is executed when the migration condition is met, and the loop repeats until the stopping conditions are satisfied, after which the solution is returned.]

Function Migration_Par_MMPSO_SA: Extracting tests using migration parallel MMPSO and SA
For each available thread t do
  Generate the initial population by using SA, as it is capable of escaping local minimums. In this stage, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest, and any incorrect solutions are removed
  While stop conditions are not met do
    Gather Gbest and all Pbest
    For each Pbest do
      Move Pbest towards Gbest using location information of Gbest
      Apply the velocity update using equation (4)
      Apply the position update using equation (5)
    End for
    Gbest moves in a random direction to search for the optimal solution
    If the probability for migration c is met then
      Execute function Migration_MMPSO with t
    End if
  End while
End for

ALGORITHM 2: Pseudocode of the migration parallel MMPSO with SA.
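The SA-based initialization used in Algorithm 2 can be sketched as below, using the SA parameters of Table 4 (initial temperature 100, cooling rate 0.9, termination temperature 0.01, 100 iterations per temperature). The neighbour move and the `fitness` callback are illustrative assumptions, and the Pbest-to-Gbest refinement is omitted.

```python
import math
import random

def sa_seed(bank, m, fitness, t0=100.0, cooling=0.9, t_end=0.01, iters=100):
    """Generate one initial candidate test (a list of m distinct question
    ids from `bank`) with simulated annealing; `fitness` maps a candidate
    to a non-negative value, lower being better."""
    current = random.sample(bank, m)
    best, t = list(current), t0
    while t > t_end:
        for _ in range(iters):
            # neighbour move: swap one question for one currently outside the test
            cand = list(current)
            cand[random.randrange(m)] = random.choice(
                [q for q in bank if q not in cand])
            delta = fitness(cand) - fitness(current)
            # accept improvements always, worse moves with probability e^(-delta/t)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand
            if fitness(current) < fitness(best):
                best = list(current)
        t *= cooling  # cooling schedule
    return best
```

Because worse moves are occasionally accepted while the temperature is high, the seed population is less likely to start inside a local optimum than a purely random draw.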


[Figure 2: The allocation of the difficulty level and time of questions in the small question bank. For each question id (1 to 1000), the chart plots the difficulty level of the question (0 to 1) and the time of the question (0 to 140 s).]

[Figure 3: The allocation of the difficulty level and time of questions in the large question bank. For each question id (1 to 12,000), the chart plots the difficulty level of the question (0 to 1) and the time of the question (0 to 140 s).]

Table 4: Experimental parameters.

The required level of difficulty (ODR): 0.5
The total test time (TR): 5400 (seconds)
The value of α: [0.1, 0.9]
The number of required questions in a test: 100
The number of questions in each section in the test: 10
The number of simultaneously generated tests in each run: 100
The number of questions in the bank: 1000 and 12,000

The PSO's parameters:
The number of particles in each swarm: 10
Random values r1, r2: in [0, 1]
The percentage of Pbest individuals which receive position information from Gbest (C1): 5
The percentage of Gbest which moves to final goals (C2): 5
The percentage of migration δ: 10
The percentage of migration probability c: 5
The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000

The SA's parameters:
Initial temperature: 100
Cooling rate: 0.9
Termination temperature: 0.01
Number of iterations: 100

Table 5: Experimental results in the small question bank.
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (s) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11 | 61.3658 | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445 | 47.9793 | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425 | 35.8007 | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530 | 30.5070 | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774 | 29.5425 | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973 | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059 | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581 | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975 | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4 | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621 | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441 | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127 | 76.75 | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747 | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690 | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575 | 51.3890 | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578 | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135 | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524 | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976 | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573 | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467 | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420 | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735 | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816 | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966 | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536 | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098 | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776 | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981 | 84.57 | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094 | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685 | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank.
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (s) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 2931 | 2.350 | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 1.405 | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 0.956 | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 0.599 | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 0.361 | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 0.277 | 79.32 | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 0.322 | 92.33 | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 0.492 | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 1.098 | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 9.975 | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 8.490 | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 7.443 | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 7.303 | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 6.988 | 99.92 | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 6.734 | 61.42 | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 5.343 | 69.02 | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 7.006 | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 7.958 | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 3.352 | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 1.933 | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 1.243 | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 0.857 | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 0.602 | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 0.444 | 78.89 | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 0.498 | 92.53 | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 0.794 | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 1.576 | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 10.200 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 8.568 | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 7.735 | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 7.319 | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 6.962 | 99.66 | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 6.783 | 61.33 | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 6.419 | 69.02 | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 6.146 | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 7.162 | 319.68 | 0.000731 | 1.03 | 0.000213

[Figure 4: Experimental results of the small question bank: runtime (s) and fitness value versus the weight constraint α (0.1 to 0.9) for parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA. The plotted data are those of Table 5.]


where z is the number of experimental runs, ȳ is the average fitness over all runs, and y_i is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms: if its value is low, then the generated tests of different runs do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, changing α can shift the importance towards the test duration constraint, the test difficulty constraint, or a balance between the two. We can select α to suit our requirements, emphasizing either the test duration or the test difficulty.
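Equation (6) is the ordinary sample standard deviation of the per-run fitness values and can be computed directly; a minimal sketch:

```python
import math

def run_stddev(fitnesses):
    """Sample standard deviation of per-run fitness values, equation (6)."""
    z = len(fitnesses)
    y_bar = sum(fitnesses) / z  # average fitness over the z runs
    return math.sqrt(sum((y - y_bar) ** 2 for y in fitnesses) / (z - 1))
```

The (z − 1) denominator is Bessel's correction for a sample rather than a full population.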

5.3. Experimental Results. The experiments are executed with the parameters in Table 4, chosen following Ridge and Kudenko [36], and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract the 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of the fitness values of all 5000 generated tests. The average duplicate indicates the average percentage of duplicate questions among the 100 generated tests over all 50 runs; it is also used to indicate the diversity of the tests: the lower the value, the more diverse the tests.

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness value, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the test difficulty requirement is harder than satisfying the total test time requirement. The experimental results also show that integrating SA gives a better fitness value than not using SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests in a single run, each containing 30 questions, then some generated tests must contain questions that also appear in other generated tests.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to get generated tests that better fit the test requirements. The reason is that the generated tests obtained in the first run are likely close to the requirements (due to the low fitness value), and the chance of obtaining generated tests that fit the requirements less well is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multichoice tests, and the quality of the multichoice questions is good (diversity in the difficulty levels of the questions and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

[Figure 5: Experimental results of the large question bank: runtime (s) and fitness value versus the weight constraint α (0.1 to 0.9) for parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA. The plotted data are those of Table 6.]

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with a focus on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.

[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.



3. Problem Statement

3.1. Problem of Generating Multiple Tests. In our previous works [3, 33], we proposed a PSO-based method for multichoice test generation; however, it was a single-objective approach. In this paper, we introduce a multiobjective approach to multichoice test generation by combining the PSO and SA algorithms.

Let Q = {q1, q2, q3, ..., qn} be a question bank with n questions. Each question qi ∈ Q contains four attributes {QC, SC, QD, OD}. QC is a question identifier code, used to avoid duplication of any question in the solution. SC denotes the section code of the question and is used to indicate which section the question belongs to. QD denotes the time limit of the question, and OD denotes a real value in the range [0.1, 0.9] that represents the objective difficulty (level) of the question. QC, SC, and QD are discrete positive integer values, as in the work of Bui et al. [3] and Nguyen et al. [33].

The problem of generating multiple k tests (or just multiple tests) is to generate k tests simultaneously in a single run; i.e., our objective is to generate a set of tests in which each test Ei = {qi1, qi2, qi3, ..., qim} (qij ∈ Q, 1 ≤ j ≤ m, 1 ≤ i ≤ k, k ≤ n) consists of m (m ≤ n) questions. Additionally, those tests must satisfy both the requirement on objective difficulty ODR and the testing time duration TR given by the users. For example, ODR = 0.8 and TR = 45 minutes mean that all the generated tests must have a level of difficulty approximately equal to 0.8 and a test time equal to 45 minutes.

The objective difficulty of a test Ei is defined as OD_{Ei} = (Σ_{j=1}^{m} q_{ij}·OD)/m, and the duration of the test Ei is determined by T_{Ei} = Σ_{j=1}^{m} q_{ij}·QD.

Besides the aforementioned requirements, each generated test must satisfy the following additional constraints:

C1: each question in a generated test must be unique (i.e., a question cannot appear more than once in a test).
C2: to make the test more diverse, there must be no case in which all questions in a test have the same difficulty value as the required objective difficulty ODR. For example, if ODR = 0.6, then ∃ q_{ki} ∈ E_k such that q_{ki}·OD ≠ 0.6.
C3: some questions in a question bank must stay in the same groups because their content relates to each other. The generated tests must ensure that all the questions of one group appear together; this means that if a question of a specific group appears in a test, the remaining questions of the group must also be present in the same test [3, 33].
C4: as users may require generated tests to have several sections, a generated test must ensure that the required number of questions is drawn from the question bank for each section.

3.2. Modeling MMPSO for the Problem of Generating Multiple Tests. The MMPSO model for the problem of generating multiple tests can be represented as follows:

min f1(x1, x2, x3, ..., xm)
min f2(x1, x2, x3, ..., xm)
...
min fk(x1, x2, x3, ..., xm),    (2)

where f1, f2, ..., fk are the swarms that represent the multiple tests and x1, ..., xm are the questions in a test.

Assume that F is the objective function for the multiobjective problem; it can be formulated as follows:

F = Σ α_i F_i,    (3)

where α_i is a weight constraint (α_i ∈ (0, 1]) with Σ α_i = 1, and F_i is a single-objective function. In this paper, we use an evaluation of two functions: the average difficulty requirement of the tests, F1, and the total test duration, F2.

F1 = f(q_{kj}·OD) = |(Σ_{j=1}^{m} q_{kj}·OD)/m − ODR|, where F1 satisfies the conditions C1, C2, C3, and C4; m is the total number of questions in the test; q_{kj}·OD is the difficulty value of each question; and ODR is the required difficulty level.

F2 = f(q_{kj}·QD) = |1 − (Σ_{j=1}^{m} q_{kj}·QD)/TR|, where F2 satisfies the conditions C1, C2, C3, and C4; m is the total number of questions in the test; q_{kj}·QD is the duration of each question; and TR is the required total time of the tests.

The objective function F is used as the fitness function in the MMPSO, and the value of the objective function is considered the fitness of the resulting test. The smaller F becomes, the better the fitness. To improve the quality of the test, we also take into account the constraints C1, C2, C3, and C4.

For example, given the question bank in Table 2, suppose the test extraction requirements are four questions, a difficulty level of 0.6 (ODR = 0.6), a total test duration of 300 seconds (TR = 300), and a weight constraint α = 0.4. Table 3 illustrates a candidate solution with its fitness of 0.1 computed by using formula (3).
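Formula (3) with the two component functions can be computed directly. In this sketch (ours, not the authors' code) each question is reduced to an (OD, QD) pair:

```python
def fitness(questions, od_r, t_r, alpha):
    """F = alpha*F1 + (1 - alpha)*F2, as in formula (3).

    questions: list of (od, qd) pairs for the selected questions;
    od_r: required difficulty ODR; t_r: required total time TR (seconds);
    alpha: the weight constraint balancing the two objectives.
    """
    m = len(questions)
    f1 = abs(sum(od for od, _ in questions) / m - od_r)   # difficulty gap
    f2 = abs(1 - sum(qd for _, qd in questions) / t_r)    # duration gap
    return alpha * f1 + (1 - alpha) * f2
```

For instance, two questions with difficulties 0.5 and 0.7 and durations of 100 s each, with ODR = 0.6, TR = 250, and α = 0.5, give F = 0.5 × 0 + 0.5 × 0.2 = 0.1.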

4. MMPSO in Extracting Multiple Tests

4.1. Process of MMPSO for Extracting Tests. This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests, based on the idea in Bui et al. [3]. It can be described as follows. Creating an initial swarm population is the first step in PSO, in which each particle in a swarm is considered a candidate test; this first population also affects the speed of convergence to optimal solutions. This step randomly picks questions from a question bank. The questions, either stand-alone or staying in groups (constraint C3), are drawn for one section (constraint C4) until the required number of questions for the section is reached, and the drawing process is repeated for the next sections. When the required number of questions of the candidate test and all the constraints are met, the fitness value of the generated test is computed according to formula (3).

The Gbest and Pbest position information is the contained questions. All Pbest slowly move towards Gbest by using the location information of Gbest. The movement is the replacement of some questions in the candidate test according to the velocity of Pbest. If the fitness value of a newly found Pbest of a particle is smaller than the particle's currently best-known Pbest (i.e., the new position is better than the old one), then we assign the newly found position value to Pbest.

Gbest moves towards the final optimal solution in random directions. The movement is achieved by replacing some of its content with random questions from the question bank. Similarly to Pbest, if the new position is no better than the old one, the Gbest value is not updated.

The algorithm ends when the fitness value is lower than the fitness threshold ε or the number of movements (iteration loops) surpasses the loop threshold λ. Both thresholds are given by users.

4.2. Migration Parallel MMPSO for Extracting Tests (Parallel MMPSO). Based on the idea in Nguyen et al. [33], we present the migration parallel approach of MMPSO for increasing performance. Each swarm now corresponds to a thread, and migration happens by chance between swarms. The migration method starts with locking the current thread (swarm) to avoid interference from other threads.

In the dual-sector model [34], Lewis describes a relationship between two regions: the subsistence sector and the capitalist sector. We can view the two types of economic sectors here as the strong (capitalist) sectors and the weak (subsistence) sectors (while ignoring other related aspects of the economy). Whether a sector is strong or weak depends on the fitness value of the Gbest position of its swarm. However, when applying these theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The direction of migration changes when individuals with strong Pbest (strong individuals) in strong sectors move to weak sectors. The weak sectors' Gbest may be replaced by the incoming Pbest, and the fitness value of the weak swarms should receive a large lift, as in the work of Nguyen et al. [33].

Backward migration from the weak swarms to the strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This ensures that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there is a change in the fitness value of the current Gbest compared to the previous Gbest.

The probability for migration is denoted as c, and its unit is a percentage (%).

The number of migrating particles is equal to δ times the size of the swarm (i.e., the number of existing particles in the swarm), where δ denotes the percentage of migration.

The migration parallel MMPSO-based approach to extract multiple tests is described in the form of a pseudocode in Algorithm 1.

The particle updates its velocity (V) and position (P) with the following formulas:

V^{t+1} = V^t + r1 × V^t_{pbest} + r2 × V^t_{gbest},    (4)

where V_{pbest} is the velocity of Pbest, determined by V_{pbest} = α × m; V_{gbest} is the velocity of Gbest, determined by V_{gbest} = β × m; α, β ∈ (0, 1); r1, r2 are random values; and m is the number of questions in the test solutions.

P^{t+1} = P^t + V^{t+1}.    (5)
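In this discrete setting, the "velocity" of equations (4) and (5) amounts to a number of questions to replace. The sketch below is an illustration under our own assumptions (the function name and replacement policy are ours, not the authors'): r1·V_{pbest} and r2·V_{gbest} questions are copied from Pbest and Gbest, respectively, with V_{pbest} = α·m and V_{gbest} = β·m.

```python
import random

def update_particle(position, pbest, gbest, alpha=0.05, beta=0.05,
                    r1=None, r2=None):
    """Apply equations (4) and (5) to a test particle.

    position, pbest, gbest: lists of distinct question ids. The velocity
    contributions r1*Vpbest and r2*Vgbest (with Vpbest = alpha*m and
    Vgbest = beta*m) give how many questions to copy from each best.
    """
    m = len(position)
    r1 = random.random() if r1 is None else r1
    r2 = random.random() if r2 is None else r2
    new = list(position)
    for src, n in ((pbest, int(r1 * alpha * m)), (gbest, int(r2 * beta * m))):
        missing = [q for q in src if q not in new]
        for q in random.sample(missing, min(n, len(missing))):
            # overwrite a question not shared with `src` by one taken from it
            replaceable = [x for x in new if x not in src]
            if not replaceable:
                break
            new[new.index(random.choice(replaceable))] = q
    return new
```

The defaults α = β = 0.05 mirror the 5% values of C1 and C2 in Table 4; because replacements are drawn only from questions the particle does not already hold, the updated test keeps m distinct questions.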

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying the tests. This stage happens when there is a change in the value of Gbest of a swarm (for each thread) in the first stage. In this second stage, migration happens between swarms to exchange information between running threads in order to improve the convergence and diversity of solutions, based on the work of Nguyen et al. [33]. The complete flowchart of the MMPSO algorithm with the parallelized migration method is shown in Figure 1.

4.3. Migration Parallel MMPSO in Combination with Simulated Annealing. As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of a set of initial solutions (the population) is generally performed randomly in PSO. This is one of its drawbacks: since the search space is very wide, the probability of getting stuck in a local optimum solution is

Table 3: An example of test results.
An individual that satisfies the requirement (ODR = 0.6, TR = 300, α = 0.4):
QC: 05 08 01 04
OD: 0.4 0.8 0.3 0.7
QD: 65 35 100 40
Fitness (1): (0.4 × |(2.2/4) − 0.6|) + [(1 − 0.4) × |1 − (240/300)|] = 0.1

Table 2: The question bank.
QC: 01 02 03 04 05 06 07 08 09 10
OD: 0.3 0.2 0.8 0.7 0.4 0.6 0.5 0.8 0.2 0.3
QD: 100 110 35 40 65 60 60 35 110 100
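The fitness computation of Table 3 can be checked directly. The sketch below (Python; the function name is ours) evaluates the weighted sum α·|mean(OD) − ODR| + (1 − α)·|1 − ΣQD/TR| for the four-question individual shown; the exact value is 0.14, which the extracted table reports rounded as 0.1.

```python
def fitness(od, qd, odr, tr, alpha):
    # Weighted sum of the two objectives used in Table 3: the gap between
    # the test's mean difficulty and ODR, and the gap between the total
    # duration and TR.
    f1 = abs(sum(od) / len(od) - odr)   # difficulty objective
    f2 = abs(1 - sum(qd) / tr)          # duration objective
    return alpha * f1 + (1 - alpha) * f2

# The individual of Table 3: questions QC 05, 08, 01, 04 from Table 2
od = [0.4, 0.8, 0.3, 0.7]   # difficulty levels (OD)
qd = [65, 35, 100, 40]      # question durations (QD), sum = 240
f = fitness(od, qd, odr=0.6, tr=300, alpha=0.4)
# 0.4 * |2.2/4 - 0.6| + 0.6 * |1 - 240/300| = 0.02 + 0.12 = 0.14
```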

Scientific Programming 7

also high. In order to improve the initial population, we apply SA in the initial population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optimums, as in Kharrat and Neji [35]. In this study, the process of finding new solutions using SA is improved by moving Pbest toward Gbest using the received information about the location of Gbest (which is commonly used in PSO). The MMPSO with SA is described by the pseudocode in Algorithm 2.
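A minimal sketch of how SA can seed the initial population: starting from a random candidate test, SA repeatedly swaps one question and accepts worse candidates with a temperature-dependent probability, so early particles are already reasonably fit. The default parameters mirror the SA settings of Table 4 (initial temperature 100, cooling rate 0.9, termination temperature 0.01, 100 iterations per temperature); the toy fitness and data structures are simplified stand-ins, not the paper's code.

```python
import math
import random

def sa_initial_test(bank, n, fitness, rng,
                    t0=100.0, cooling=0.9, t_min=0.01, iters=100):
    # Simulated annealing over candidate tests: a neighbour swaps one chosen
    # question for a bank question not already in the test; worse neighbours
    # are accepted with probability exp(-dF / T), which lets the search
    # escape local optima before the swarm takes over.
    current = rng.sample(bank, n)
    f_cur = fitness(current)
    t = t0
    while t > t_min:
        for _ in range(iters):
            i = rng.randrange(n)
            choices = [q for q in bank if q not in current]
            neighbour = list(current)
            neighbour[i] = rng.choice(choices)
            f_new = fitness(neighbour)
            if f_new < f_cur or rng.random() < math.exp((f_cur - f_new) / t):
                current, f_cur = neighbour, f_new
        t *= cooling
    return current, f_cur

# Toy usage: difficulty levels from Table 2 (tripled); target mean 0.5
bank = [0.3, 0.2, 0.8, 0.7, 0.4, 0.6, 0.5, 0.8, 0.2, 0.3] * 3
fit = lambda test: abs(sum(test) / len(test) - 0.5)
best, f_best = sa_initial_test(bank, 5, fit, random.Random(42))
```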

5. Experimental Studies

5.1. Experimental Environment and Data. Bui et al. [3] evaluated different algorithms, such as the random method, genetic algorithms, and PSO-based algorithms, for extracting tests from question banks of varying sizes. The results of the experiment showed that the PSO-based algorithms are better than the others. Hence, the experiment in this paper only evaluates and compares the improved SA parallel MMPSO algorithm with the normal parallel MMPSO algorithm in terms of the diversity of tests and the accuracy of solutions.

All proposed algorithms are implemented in C# and run on 2 computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include 2 question banks. One is with 998

different questions (the small question bank), and the other one is with 12,000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12,000 different questions, in which each part has 1,000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) and an evaluation of the two functions, which are the average level of difficulty required of the tests (F1) and the total test duration (F2).

5.2. Evaluation Method. In this part, we present the formula for the evaluation of all algorithms regarding their stability in producing the required tests with various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

standard deviation = sqrt( Σ_{i=1}^{z} (y_i − ȳ)² / (z − 1) ), (6)
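Equation (6) is the ordinary sample standard deviation of the per-run fitness values. A direct Python rendering (the function name is ours), applied here to four illustrative fitness values:

```python
import math

def stability(fitness_per_run):
    # Equation (6): sqrt( sum_i (y_i - ybar)^2 / (z - 1) ), where z is the
    # number of runs, y_i the fitness of run i, and ybar the mean fitness.
    z = len(fitness_per_run)
    ybar = sum(fitness_per_run) / z
    return math.sqrt(sum((y - ybar) ** 2 for y in fitness_per_run) / (z - 1))

s = stability([0.003102, 0.003117, 0.004150, 0.004850])
```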

Function Migration_Par_MMPSO: Extracting tests using Migration Parallel MMPSO
  For each available thread t do
    Generate the initial population with random questions
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest
        Move Pbest towards Gbest using location information of Gbest
        Update velocity using equation (4)
        Update position using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Function Migration_MMPSO: Improving solutions with the migration method
  Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference from other threads during the migration procedure
  Select λ, the set of stronger individuals for migration, excluding the Gbest
  Lock the other threads so that no unintended changes happen to them during the migration
  Choose a thread that has a Gbest weaker than the one in the current thread
  Unlock the other threads except for the chosen thread
  Set the status of the chosen thread to "Exchanging"
  Move the λ selected individuals to the chosen thread
  Remove those λ selected individuals
  Select the λ weakest individuals in the chosen thread
  Add those λ weakest individuals to the current thread
  Set the status of the chosen thread to "Available"
  Unlock the current thread and the chosen thread

Algorithm 1: Pseudocode of migration parallel MMPSO.
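The Migration_MMPSO steps above can be sketched directly: λ strong individuals (excluding the swarm's Gbest) move forward to a weaker swarm, and the same number of weak individuals move back, so both swarm sizes are preserved. A minimal single-threaded Python sketch, with the locking and status bookkeeping of Algorithm 1 elided and names of our choosing:

```python
def migrate(strong, weak, lam, fitness):
    # Forward migration: the lam best individuals of `strong` (excluding its
    # very best, which stays as Gbest) move to `weak`; backward migration
    # returns the lam weakest individuals of `weak`, keeping sizes unchanged.
    strong.sort(key=fitness)          # ascending: smaller fitness is better
    movers = strong[1:lam + 1]        # skip index 0, the swarm's Gbest
    del strong[1:lam + 1]
    weak.sort(key=fitness, reverse=True)
    returners = weak[:lam]            # the lam weakest of the weak swarm
    del weak[:lam]
    weak.extend(movers)
    strong.extend(returners)

strong = [0.1, 0.2, 0.3, 0.9]   # toy particles, fitness = the value itself
weak = [0.5, 0.6, 0.7, 0.8]
migrate(strong, weak, lam=2, fitness=lambda x: x)
# strong keeps its Gbest 0.1; 0.2 and 0.3 migrate to the weak swarm, whose
# new best improves from 0.5 to 0.2, while 0.8 and 0.7 migrate back.
```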


[Figure 1 shows the flowchart of the migration parallel MMPSO algorithm: the extraction requirements are input; each of the k threads runs a single PSO combined with SA (SA generates the initial population); the fitness F = Σ α_i F_i is evaluated; Gbest and Pbest are selected, Pbest approaches Gbest, and Gbest approaches the goal; when the migration condition is met, migration occurs between threads; the loop repeats until the stopping conditions hold, and the solution is returned.]

Figure 1: The flowchart of the migration parallel MMPSO algorithm.

Function Migration_Par_MMPSO_SA: Extracting tests using Migration Parallel MMPSO and SA
  For each available thread t do
    Generate the initial population by using SA, as it is capable of escaping local minimums. In this stage, the process of finding new solutions using SA is improved by moving Pbest to Gbest using the received information about the location of Gbest, and any incorrect solutions are removed
    While stop conditions are not met do
      Gather Gbest and all Pbest
      For each Pbest
        Move Pbest towards Gbest using location information of Gbest
        Apply velocity update using equation (4)
        Apply position update using equation (5)
      End for
      Gbest moves in a random direction to search for the optimal solution
      If the probability for migration γ is met then
        Execute function Migration_MMPSO with t
      End if
    End while
  End for

Algorithm 2: Pseudocode of migration parallel MMPSO with SA.


[Figure 2 plots, for each question ID in the small bank (1 to 1000), the question's difficulty level (0 to 1) and time (0 to 140).]

Figure 2: The allocation of the difficulty level and time of questions in the small question bank.

[Figure 3 plots, for each question ID in the large bank (1 to 12000), the question's difficulty level (0 to 1) and time (0 to 140).]

Figure 3: The allocation of the difficulty level and time of questions in the large question bank.

Table 4: Experimental parameters.
  The required level of difficulty (ODR): 0.5
  The total test time (TR): 5400 (seconds)
  The value of α: [0.1, 0.9]
  The number of required questions in a test: 100
  The number of questions in each section in the test: 10
  The number of simultaneously generated tests in each run: 100
  The number of questions in the bank: 1,000 and 12,000
The PSO's parameters:
  The number of particles in each swarm: 10
  Random values r1, r2: in [0, 1]
  The percentage of Pbest individuals which receive position information from Gbest (C1): 5%
  The percentage of Gbest which moves to final goals (C2): 5%
  The percentage of migration (δ): 10%
  The percentage of migration probability (γ): 5%
  The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000
The SA's parameters:
  Initial temperature: 100
  Cooling rate: 0.9
  Termination temperature: 0.01
  Number of iterations: 100
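For reference, the Table 4 settings can be collected into one configuration, with the stop rule implemented as stated (fitness tolerance below 0.001, or more than 1000 movement loops). A sketch with names of our own choosing:

```python
# Experimental parameters of Table 4, gathered into a single configuration.
PARAMS = {
    "ODR": 0.5,                   # required level of difficulty
    "TR": 5400,                   # total test time (seconds)
    "questions_per_test": 100,
    "questions_per_section": 10,
    "tests_per_run": 100,
    "particles_per_swarm": 10,
    "C1": 0.05, "C2": 0.05,       # Pbest/Gbest movement percentages
    "delta": 0.10,                # migration percentage
    "gamma": 0.05,                # migration probability
    "fitness_tolerance": 0.001,
    "max_loops": 1000,
    "sa": {"t0": 100, "cooling": 0.9, "t_min": 0.01, "iterations": 100},
}

def should_stop(fitness, loops, p=PARAMS):
    # Stop when the fitness tolerance is reached or the loop budget is spent.
    return fitness < p["fitness_tolerance"] or loops > p["max_loops"]
```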

Table 5: Experimental results in the small question bank.
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (s) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11 | 61.3658 | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445 | 47.9793 | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425 | 35.8007 | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530 | 30.5070 | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774 | 29.5425 | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973 | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059 | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581 | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975 | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4 | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621 | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441 | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127 | 76.75 | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747 | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690 | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575 | 51.3890 | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578 | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135 | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524 | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976 | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573 | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467 | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420 | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735 | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816 | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966 | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536 | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098 | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776 | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981 | 84.57 | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094 | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685 | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank.
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (s) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 2931 | 23.50 | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 14.05 | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 9.56 | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 5.99 | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 3.61 | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 2.77 | 79.32 | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 3.22 | 92.33 | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 4.92 | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 10.98 | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 99.75 | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 84.90 | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 74.43 | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 73.03 | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 69.88 | 99.92 | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 67.34 | 61.42 | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 53.43 | 69.02 | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 70.06 | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 79.58 | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 33.52 | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 19.33 | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 12.43 | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 8.57 | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 6.02 | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 4.44 | 78.89 | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 4.98 | 92.53 | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 7.94 | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 15.76 | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 102.00 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 85.68 | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 77.35 | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 73.19 | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 69.62 | 99.66 | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 67.83 | 61.33 | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 64.19 | 69.02 | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 61.46 | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 71.62 | 319.68 | 0.000731 | 1.03 | 0.000213

[Figure 4 plots, for each weight constraint α from 0.1 to 0.9, the runtime (s) and fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA on the small question bank.]

Figure 4: Experimental results of the runtime and fitness value in Table 5.


where z is the number of experimental runs, ȳ is the average fitness of all runs, and y_i is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms. If its value is low, then the generated tests of each run do not differ much in their fitness values. The weight constraint α is also examined, as it balances the objective functions. In our cases, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or the balance between the two. We can select α to suit our requirements, emphasizing either the test duration or the test difficulty.

5.3. Experimental Results. The experiments are executed with the parameters in Table 4, tuned following Ridge and Kudenko [36], and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness on the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of all fitness values of the 5,000 generated tests. The average duplicate indicates the average percentage of duplicate questions among the 100 generated tests over all 50 runs; it is also used to indicate the diversity of tests. The lower the value, the more diverse the tests.

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the

requirement for the test difficulty is harder than satisfying the requirement for the total test time. The experiment results also show that integrating SA gives a better fitness value than not using SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests in a single run, each containing 30 questions, then some generated tests must contain questions that also appear in other generated tests.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to obtain generated tests that better fit the test requirements. The reason is that the generated tests we obtain in the first run are likely close to the requirements (due to the low fitness value), and the chance that we obtain generated tests that poorly fit the requirements is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multiple-choice tests, provided the quality of the multiple-choice questions is good (diverse difficulty levels and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms, MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

[Figure 5 plots, for each weight constraint α from 0.1 to 0.9, the runtime (s) and fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA on the large question bank.]

Figure 5: Experimental results of the runtime and fitness value in Table 6.


MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with an emphasis on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575-1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616-624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405-1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218-1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298-305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759-1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883-898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383-23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920-22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868-20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863-22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282-295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771-790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909-941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126-140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129-136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1-9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71-96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11-25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20-26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1-29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362-381, 2017.

[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229-235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509-533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680-700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028-1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147-161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87-95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329-338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265-286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198-209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619-1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172-187, 2019.



Backward migration from the weak swarms to strongswarms also happens alongside forwarding migration Forevery individual that moves from a strong swarm to a weakswarm there is always one that moves from the weak swarmback to the strong swarm is is to ensure that the numberof particles and the searching capabilities of the swarms donot significantly decrease

e foremost condition for migration to happen is thatthere are changes in the fitness values of the current Gbestcompared to the previous Gbest

e probability for migration is denoted as c and theunit is a percentage ()

e number of migrating particles is equal to δ times the sizeof the swarm (ie the number of existing particles in theswarm) where δ denotes the percentage of migration

e migration parallel MMPSO-based approach to ex-tract multiple tests is described in a form of a pseudocode inAlgorithm 1

e particle updates its velocity (V) and positions (P)with the following formulas

Vt+1

Vt

+ r1 times Vtpbest + r2 times V

tgbest (4)

where Vpbest is the velocity of Pbest with Vpbest determinedby Vpbest α times m Vgbest is the velocity of Gbest with Vgbestdetermined by Vgbest β times m α β isin (0 1) r1 r2 are ran-dom values and m is the number of questions in the testsolutions

Pt+1

Pt

+ Vt+1

(5)

e process of generating multiple tests at the same timein a single run using migration parallel MMPSO includestwo stages e first stage is generating tests using multi-objective PSO In this stage the algorithm proceeds to findtests that satisfy all requirements and constraints usingmultiple threads Each thread corresponds to each swarmthat runs separately e second stage is improving anddiversifying tests is stage happens when there is a changein the value of Gbest of each swarm (for each thread) in thefirst stage In this second stage migration happens betweenswarms to exchange information between running threadsto improve the convergence and diversity of solutions basedon the work of Nguyen et al [33] e complete flowchartthat applies the parallelized migration method to theMMPSO algorithm is shown in Figure 1

43 Migration Parallel MMPSO in Combination with Simu-latedAnnealing As mentioned above the initial populationaffects the convergence speed and diversity of test solutionse creation of a set of initial solutions (population) isgenerally performed randomly in PSO It is one of thedrawbacks since the search space is too wide so theprobability of getting stuck in a local optimum solution is

Table 3 An example of test results

An individual that satisfies the requirementQC 05 08 01 04 Fitness (1) (ODR 06 TR 300 α 04)OD 04 08 03 07

(04 times |(224) minus 06|) + [(1 minus 04) times |1 minus (240300)|] 01QD 65 35 100 40

Table 2 e question banksQC 01 02 03 04 05 06 07 08 09 10OD 03 02 08 07 04 06 05 08 02 03QD 100 110 35 40 65 60 60 35 110 100

Scientific Programming 7

also high In order to improve the initial population weapply SA in the initial population creation step of migrationparallel MMPSO instead of the random method SA wasselected since it is capable of escaping local optimums inKharrat and Neji [35] In this study the process of findingnew solutions using SA is improved by moving Pbest to Gbestusing the received information about the location of Gbest(which is commonly used in PSO) e MMPSO with SA isdescribed by a pseudocode in Algorithm 2

5 Experimental Studies

51 Experimental Environment and Data Bui et al [3]evaluated different algorithms such as the random methodgenetic algorithms and PSO-based algorithm for extractingtests from a question bank of varying sizes e results of theexperiment showed that the PSO-based algorithms are betterthan others Hence the experiment in this paper onlyevaluated and compared the improved SA parallel MMPSOalgorithm with the normal parallel MMPSO algorithm interms of the diversity of tests and the accuracy of solutions

All proposed algorithms are implemented in C and run on two computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include two question banks: one with 998 different questions (the small question bank) and the other with 12,000 different questions (the large question bank). The link to the data is https://drive.google.com/file/d/1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGI/view. The small question bank consists of multiple sections, and each section has more than 150 questions with different difficulty levels (Figure 2). The large question bank includes 12,000 different questions, in which each part has 1,000 questions with different difficulty levels (Figure 3). The experimental parameters of MMPSO are presented in Table 4. The results are shown in Tables 5 and 6 and Figures 4 and 5.

Our experiments focus on implementing formula (3) and evaluating the two objective functions, which are the average difficulty level required of the tests (F1) and the total test duration (F2).

5.2. Evaluation Method. In this part, we present the formula used to evaluate the stability of all algorithms in producing the required tests under various weight constraints (α). The main measure is the standard deviation, which is defined as follows:

A = √( Σ_{i=1}^{z} (y_i − ȳ)² / (z − 1) ),  (6)
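Equation (6) is the sample standard deviation of the per-run fitness values; a direct transcription in Python:

```python
import math

def stability(run_fitness):
    """Sample standard deviation A of the per-run fitness values, equation (6):
    z runs, mean fitness y-bar, denominator z - 1."""
    z = len(run_fitness)
    y_bar = sum(run_fitness) / z
    return math.sqrt(sum((y - y_bar) ** 2 for y in run_fitness) / (z - 1))

print(round(stability([0.002, 0.004, 0.003]), 6))  # 0.001
```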

Function Migration_Par_MMPSO: Extracting tests using Migration Parallel MMPSO
    For each available thread t do
        Generate the initial population with random questions
        While stop conditions are not met do
            Gather Gbest and all Pbest
            For each Pbest do
                Move Pbest towards Gbest using location information of Gbest
                Update velocity using equation (4)
                Update position using equation (5)
            End for
            Gbest moves in a random direction to search for the optimal solution
            If the probability for migration c is met then
                Execute function Migration_MMPSO with t
            End if
        End while
    End for

Function Migration_MMPSO: Improving solutions with the migration method
    Lock the current thread (i.e., block all modifications from other threads to the current thread) to avoid interference during the migration procedure
    Select λ, the set of stronger individuals for migration, excluding the Gbest
    Lock the other threads so that no unintended changes happen to them during the migration
    Choose a thread that has a Gbest weaker than the one in the current thread
    Unlock the other threads except for the chosen thread
    Set the status of the chosen thread to "Exchanging"
    Move the λ selected individuals to the chosen thread
    Remove those λ selected individuals
    Select the λ weakest individuals in the chosen thread
    Add those λ weakest individuals to the current thread
    Set the status of the chosen thread to "Available"
    Unlock the current thread and the chosen thread

ALGORITHM 1: Pseudocode of migration parallel MMPSO.
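A condensed sketch of the migration step of Algorithm 1 (illustrative only: the thread locking and "Exchanging"/"Available" status bookkeeping are collapsed into a single function that swaps the λ strongest non-Gbest individuals of the current swarm for the λ weakest of a weaker swarm):

```python
def migrate(current, weaker, lam):
    """Exchange individuals between two swarms, each a list of
    (fitness, particle) pairs where lower fitness is better. The current
    swarm sends its lam strongest individuals except Gbest and receives
    the lam weakest individuals of the weaker (chosen) swarm."""
    current.sort()  # index 0 = Gbest of the swarm
    weaker.sort()
    strong = current[1:lam + 1]  # strongest individuals, Gbest excluded
    weak = weaker[-lam:]         # weakest individuals of the chosen swarm
    del current[1:lam + 1]
    del weaker[-lam:]
    weaker.extend(strong)        # move strong individuals to the weaker swarm
    current.extend(weak)         # bring its weakest back to the current swarm
    return current, weaker

swarm_a = [(0.1, "a0"), (0.2, "a1"), (0.3, "a2"), (0.4, "a3")]
swarm_b = [(0.5, "b0"), (0.6, "b1"), (0.7, "b2"), (0.8, "b3")]
migrate(swarm_a, swarm_b, lam=2)
print(sorted(p for _, p in swarm_b))  # ['a1', 'a2', 'b0', 'b1']
```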


[Figure 1 appears here: a flowchart in which extraction requirements are input; threads 1 through k each run a single PSO combined with SA (SA generates the initial population, the fitness F = Σ α_i F_i is evaluated, Gbest and Pbest are selected, Pbest approaches Gbest, and Gbest approaches the goal); migration runs between threads when the migration condition is met; and the loop repeats until the stopping conditions are satisfied and the solution is obtained.]

Figure 1: The flowchart of the MMPSO algorithm in migration parallel.

Function Migration_Par_MMPSO_SA: Extracting tests using Migration Parallel MMPSO and SA
    For each available thread t do
        Generate the initial population by using SA, as it is capable of escaping local minima. In this stage, the process of finding new solutions using SA is improved by moving Pbest towards Gbest using the received information about the location of Gbest, and any incorrect solutions are removed
        While stop conditions are not met do
            Gather Gbest and all Pbest
            For each Pbest do
                Move Pbest towards Gbest using location information of Gbest
                Apply the velocity update using equation (4)
                Apply the position update using equation (5)
            End for
            Gbest moves in a random direction to search for the optimal solution
            If the probability for migration c is met then
                Execute function Migration_MMPSO with t
            End if
        End while
    End for

ALGORITHM 2: Pseudocode of migration parallel MMPSO with SA.
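The per-thread loop shared by Algorithms 1 and 2 can be sketched as follows. This is a simplified single-objective stand-in: particles are real-valued, tests are abstracted away, and the classic Kennedy–Eberhart update [12] replaces equations (4) and (5) from the paper; the SA variant would only change how `init_population` is produced.

```python
import random

def pso_thread(fitness, init_population, iters=200, w=0.7, c1=1.5, c2=1.5):
    """One swarm of the parallel loop: every particle tracks its Pbest,
    the swarm tracks Gbest, and Pbest is pulled towards Gbest each
    iteration via the velocity/position updates."""
    pos = list(init_population)
    vel = [0.0] * len(pos)
    pbest = pos[:]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(len(pos)):
            r1, r2 = random.random(), random.random()
            # Stand-in velocity/position update for equations (4)/(5)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=fitness)
    return gbest

random.seed(0)
f = lambda x: (x - 3.0) ** 2  # toy objective with optimum at x = 3
best = pso_thread(f, [random.uniform(-10.0, 10.0) for _ in range(10)])
print(abs(best - 3.0) < 0.1)
```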


[Figure 2 appears here: a plot of the question's difficulty level (0 to 1) and the question's time (0 to 140) against the question's id in the bank (1 to 1000).]

Figure 2: The allocation of the difficulty level and time of question in the small question bank.

[Figure 3 appears here: a plot of the question's difficulty level (0 to 1) and the question's time (0 to 140) against the question's id in the bank (1 to 12000).]

Figure 3: The allocation of the difficulty level and time of question in the large question bank.

Table 4: Experimental parameters.

The required level of difficulty (ODR): 0.5
The total test time (TR): 5400 (seconds)
The value of α: [0.1, 0.9]
The number of required questions in a test: 100
The number of questions in each section in the test: 10


Table 4: Continued.

The number of simultaneously generated tests in each run: 100
The number of questions in the bank: 1,000 and 12,000

The PSO's parameters:
    The number of particles in each swarm: 10
    Random values r1, r2: in [0, 1]
    The percentage of Pbest individuals which receive position information from Gbest (C1): 5
    The percentage of Gbest which moves to final goals (C2): 5
    The percentage of migration δ: 10
    The percentage of migration probability c: 5
    The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000

The SA's parameters:
    Initial temperature: 100
    Cooling rate: 0.9
    Termination temperature: 0.01
    Number of iterations: 100

Table 5: Experimental results in the small question bank.

Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (seconds) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11   | 61.3658  | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445  | 47.9793  | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425  | 35.8007  | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530  | 30.5070  | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774  | 29.5425  | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973  | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059  | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581  | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975  | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4    | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621  | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441  | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127  | 76.75  | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747  | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690  | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575  | 51.3890  | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578  | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135  | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524  | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976  | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573  | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467  | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420  | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735  | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816  | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966  | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536  | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098  | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776  | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981  | 84.57  | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094  | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685  | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank.

Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (seconds) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 2931 | 23.50 | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 14.05 | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 9.56  | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 5.99  | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 3.61  | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 2.77  | 79.32  | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 3.22  | 92.33  | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 4.92  | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 10.98 | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 99.75 | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 84.90 | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 74.43 | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 73.03 | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 69.88 | 99.92  | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 67.34 | 61.42  | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 53.43 | 69.02  | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 70.06 | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 79.58 | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 33.52 | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 19.33 | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 12.43 | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 8.57  | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 6.02  | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 4.44  | 78.89  | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 4.98  | 92.53  | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 7.94  | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 15.76 | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 102.00 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 85.68  | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 77.35  | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 73.19  | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 69.62  | 99.66  | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 67.83  | 61.33  | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 64.19  | 69.02  | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 61.46  | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 71.62  | 319.68 | 0.000731 | 1.03 | 0.000213

[Figure 4 appears here: the runtime (s) and the fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, plotted against the weight constraint α from 0.1 to 0.9 for the small question bank.]

Figure 4: Experimental results of the runtime and fitness value in Table 5.


where z is the number of experimental runs, ȳ is the average fitness over all runs, and y_i is the fitness value of the i-th run.

The standard deviation is used to assess the stability of the algorithms: if its value is low, then the generated tests of each run do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test-duration constraint, the test-difficulty constraint, or a balance between the two. We can select α to suit our requirements, emphasizing either the test duration or the test difficulty.

5.3. Experimental Results. The experiments are executed with the parameters in Table 4, chosen following Ridge and Kudenko [36], and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires a number of iteration loops to successfully extract the 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of the required loops over all 50 runs. The average fitness is the average of the fitness values of all 5000 generated tests. The average duplicate indicates the average percentage of duplicate questions among the 100 generated tests over all 50 runs; it is also used to indicate the diversity of the tests: the lower the value, the more diverse the tests.
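The average duplicate measure can be illustrated as follows. This is one possible reading of the metric (the paper does not spell out its exact formula): the share of question slots whose question also appears in another generated test.

```python
from collections import Counter

def duplicate_percentage(tests):
    """Percentage of question slots whose question also occurs in another
    generated test (one possible reading of the 'average duplicate' column)."""
    counts = Counter(q for t in tests for q in t)
    total = sum(len(t) for t in tests)
    shared = sum(1 for t in tests for q in t if counts[q] > 1)
    return 100.0 * shared / total

# Question 3 appears in two of the three tests: 2 of 9 slots are duplicates
print(round(duplicate_percentage([[1, 2, 3], [3, 4, 5], [6, 7, 8]]), 2))  # 22.22
```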

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, as α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the requirement for test difficulty is harder than satisfying the requirement for total test time. The experimental results also show that integrating SA gives a better fitness value than not using SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100, we need to generate 50 tests in a single run, and each test contains 30 questions, then some generated tests must contain questions that also appear in other generated tests.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO-with-SA algorithms many times to obtain generated tests that better fit the test requirements. The reason is that the generated tests obtained in the first run are likely close to the requirements (due to the low fitness value), and the chance of obtaining generated tests that fit the requirements poorly is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multiple-choice tests, provided that the quality of the multiple-choice questions is good (diverse difficulty levels and a large number of questions in the bank). In this paper, we propose the use of MMPSO to solve the problem of generating k tests with multiple objectives in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms, MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

[Figure 5 appears here: the runtime (s) and the fitness value of parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, plotted against the weight constraint α from 0.1 to 0.9 for the large question bank.]

Figure 5: Experimental results of the runtime and fitness value in Table 6.


MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with emphasis on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.

[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on "economic development with unlimited supplies of labour"," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.


Page 8: Multiswarm Multiobjective Particle Swarm Optimization with …downloads.hindawi.com/journals/sp/2020/7081653.pdf · ResearchArticle Multiswarm Multiobjective Particle Swarm Optimization

also high In order to improve the initial population weapply SA in the initial population creation step of migrationparallel MMPSO instead of the random method SA wasselected since it is capable of escaping local optimums inKharrat and Neji [35] In this study the process of findingnew solutions using SA is improved by moving Pbest to Gbestusing the received information about the location of Gbest(which is commonly used in PSO) e MMPSO with SA isdescribed by a pseudocode in Algorithm 2

5 Experimental Studies

51 Experimental Environment and Data Bui et al [3]evaluated different algorithms such as the random methodgenetic algorithms and PSO-based algorithm for extractingtests from a question bank of varying sizes e results of theexperiment showed that the PSO-based algorithms are betterthan others Hence the experiment in this paper onlyevaluated and compared the improved SA parallel MMPSOalgorithm with the normal parallel MMPSO algorithm interms of the diversity of tests and the accuracy of solutions

All proposed algorithms are implemented in C andrun on 2 computers which are a 25 GHz Desktop PC (4-CPUs 4 GB RAM Windows 10) and a 29 GHz VPS (16-CPUs 16GB RAM Windows Server 2012) e experi-mental data include 2 question banks One is with 998

different questions (the small question bank) and the otherone is with 12000 different questions (the large questionbank) e link to the data is httpsdrivegooglecomfiled1_EdCUNyqC9IGziFUIf4mqs0G1qHtQyGIview esmall question bank consists of multiple sections and eachsection has more than 150 questions with different diffi-culty levels (Figure 2) e large question bank includes12000 different questions in which each part has 1000questions with different difficulty levels (Figure 3) eexperimental parameters of MMPSO are presented inTable 4 e results are shown in Tables 5 and 6 andFigures 4 and 5

Our experiments focus on implementing formula (3)and an evaluation of the two functions which are the averagelevels of difficulty requirements of the tests F1 and total testduration F2

52 EvaluationMethod In this part we present the formulafor the evaluation of all algorithms about their stability toproduce required tests with various weight constraints (α)emainmeasure is the standard deviation which is definedas follows

A

1113936zi1 yiminus y( 1113857

2

z minus 1

1113971

(6)

Function Migration_Par_MMPSO Extracting tests using Migration Parallel MMPSOFor each available thread t do

Generate the initial population with random questionsWhile stop conditions are not met doGather Gbest and all PbestForeach PbestMove Pbest towards Gbest using location information of GbestUpdate velocity using equation (4)Update position using equation (5)

End forGbest moves in a random direction to search for the optimal solutionIf the probability for migration c is met then

Execute function Migration_MMPSO with tEnd if

End whileEnd for

Function Migration _MMPSO Improving solutions with migration methodLock the current thread (ie block all modifications from other threads to the current thread) to avoid interference from other

threads to the current thread during migration procedureSelect λ which are the set of stronger individuals for migration except for the GbestLock other threads so that no unintended changes will happen to them during the migrationChoose a thread that has a Gbest weaker than the one in the current threadUnlock the other threads except for chosen threadSet the status of the chosen thread to ldquoExchangingrdquoMove the λ selected individuals to the chosen threadRemove those λ selected individualsSelect the λ weakest individuals in the chosen threadAdd those λ weakest individuals to the current threadSet the status of chosen thread to ldquoAvailablerdquo

Unlock the current thread and the chosen thread

ALGORITHM 1 Pseudocode migration parallel MMPSO

8 Scientific Programming

Start

Input of extractionrequirements

Thread 1 Thread 2 Thread k

Using SA to generate initial population

F = sumαiFi

Migration condition

Migration

Gbest and Pbest selection

Check stopping conditions

Get the solution

End

Pbest approaches Gbest

Gbest approaches the goal

Y

N

Y

N

Single PSOcombine with SA

Figure 1 e flowchart of the MMPSO algorithm in migration parallel

Function Migration_Par_MMPSO_SA Extracting tests using Migration Parallel MMPSO and SAFor each available thread t doGenerate the initial population by using SA as it is capable of escaping local minimums In this stage the process of finding new solutionsusing SA is improved by moving Pbest to Gbest using the received information about the location of Gbest and remove any incorrectsolutionsWhile stop conditions are not met do

Gather Gbest and all PbestFor each PbestMove Pbest towards Gbest using location information of GbestApply velocity update using (4)Apply position update using (5)End forGbest moves in a random direction to search for the optimal solutionIf the probability for migration c is met thenExecute function Migration_MMPSO with tEnd if

End whileEnd for

ALGORITHM 2 Pseudocode migration parallel MMPSO with SA

Scientific Programming 9

0

20

40

60

80

100

120

140

0

01

02

03

04

05

06

07

08

09

1

1 1000Questionrsquos id in the bank

Tim

e of q

uesti

on

Diffi

culty

leve

l of q

uesti

on

e allocation of difficulty level and time of questionin small question bank

Difficulty level of questionTime of question

Figure 2 e allocation of the difficulty level and time of question in the small question bank

0

20

40

60

80

100

120

140

0

01

02

03

04

05

06

07

08

09

1

1 12000Questionrsquos id in the bank

Tim

e of q

uesti

on

Diffi

culty

leve

l of q

uesti

on

e allocation of difficulty level and time of question in large question bank

Difficulty level of questionTime of question

Figure 3 e allocation of the difficulty level and time of question in the large question bank

Table 4 Experimental parameterse required level of difficulty (ODR) 05e total test time (TR) 5400 (seconds)e value of α [01 09]e number of required questions in a test 100e number of questions in each section in thetest 10

10 Scientific Programming

Table 4 Continuede number of simultaneously generated testsin each run 100

e number of questions in the bank 1000 and 12000

e PSOrsquos parameters

e number of particles in each swarm 10Random value r1 r2 are in [01]

e percentage of Pbest individuals which receive position information fromGbest (C1) 5e percentage of Gbest which moves to final goals (C2) 5

e percentage of migration δ 10e percentage of migration probability c 5

e stop condition either when the tolerance fitness lt0001 or when the number ofmovement loops gt1000

e SArsquos parameters

Initial temperature 100Cooling rate 09

Termination temperature 001Number of iterations 100

Table 5: Experimental results in the small question bank.
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (seconds) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11 | 61.3658 | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445 | 47.9793 | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425 | 35.8007 | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530 | 30.5070 | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774 | 29.5425 | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973 | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059 | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581 | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975 | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4 | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621 | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441 | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127 | 76.75 | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747 | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690 | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575 | 51.3890 | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578 | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135 | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524 | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976 | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573 | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467 | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420 | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735 | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816 | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966 | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536 | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098 | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776 | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981 | 84.57 | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094 | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685 | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank.
Columns: weight constraint (α) | number of runs | successful times | average runtime for extracting tests (seconds) | average number of iteration loops | average fitness | average duplicate (%) | standard deviation.

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 2931 | 23.50 | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 14.05 | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 9.56 | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 5.99 | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 3.61 | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 2.77 | 79.32 | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 3.22 | 92.33 | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 4.92 | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 10.98 | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 9.975 | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 8.490 | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 7.443 | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 7.303 | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 6.988 | 99.92 | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 6.734 | 61.42 | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 5.343 | 69.02 | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 7.006 | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 7.958 | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 33.52 | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 19.33 | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 12.43 | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 8.57 | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 6.02 | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 4.44 | 78.89 | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 4.98 | 92.53 | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 7.94 | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 15.76 | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 10.200 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 8.568 | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 7.735 | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 7.319 | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 6.962 | 99.66 | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 6.783 | 61.33 | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 6.419 | 69.02 | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 6.146 | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 7.162 | 319.68 | 0.000731 | 1.03 | 0.000213

Figure 4: Experimental results of the runtime (0–140 s) and fitness value (0.00065–0.00515) for the small question bank in Table 5: parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA (runtime and fitness value for each), plotted against the weight constraint α from 0.1 to 0.9.


where z is the number of experimental runs, y is the average fitness of all runs, and yi is the fitness value of the ith run.

The standard deviation is used to assess the stability of the algorithms. If its value is low, then the generated tests of each run do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or the balance between the two. We can select α to suit what we require, emphasizing either the test duration or the test difficulty.
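Concretely, the stability measure above is the standard deviation of the per-run fitness values around the average y. A minimal sketch, using hypothetical fitness values (not the paper's data) and dividing by z as in the definition above:

```python
import math

def std_dev(fitnesses):
    """Standard deviation of run fitnesses: sqrt(sum((y_i - y_bar)^2) / z)."""
    z = len(fitnesses)
    y_bar = sum(fitnesses) / z              # average fitness of all runs
    return math.sqrt(sum((y - y_bar) ** 2 for y in fitnesses) / z)

runs = [0.00066, 0.00070, 0.00068, 0.00072, 0.00067]   # hypothetical per-run fitness
print(std_dev(runs))
```

A small result here (on the order of 1e-5, comparable to the standard-deviation column of Tables 5 and 6) indicates that reruns produce tests of nearly identical quality.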

5.3. Experimental Results. The experiments are executed with the parameters following Ridge and Kudenko [36] in Table 3, and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract 100 candidate tests. The average runtime for extracting tests is the average of the runtimes of all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of all fitness values of the 5000 generated tests. The average duplicate indicates the average number of duplicate questions among the 100 generated tests of each of the 50 runs; it is also used to indicate the diversity of the tests: the lower the value, the more diverse the tests.
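The aggregate metrics just described are straightforward to compute from raw run data. The sketch below uses made-up numbers (not the paper's data), and the duplicate measure is one plausible reading of "average duplicate": the percentage of a test's questions that also appear in another test of the same run.

```python
# Hypothetical raw results: one run producing 4 tests of 5 question ids each.
runs = [
    {"fitness": [0.0008, 0.0009, 0.0007, 0.0010],
     "tests": [[1, 2, 3, 4, 5], [2, 6, 7, 8, 9],
               [10, 11, 1, 12, 13], [14, 15, 16, 2, 17]]},
]

def avg_fitness(runs):
    """Average of all fitness values over all generated tests of all runs."""
    vals = [f for r in runs for f in r["fitness"]]
    return sum(vals) / len(vals)

def avg_duplicate_pct(runs):
    """Average % of a test's questions that also occur in another test of the run."""
    pcts = []
    for r in runs:
        tests = [set(t) for t in r["tests"]]
        for i, t in enumerate(tests):
            others = set().union(*(u for j, u in enumerate(tests) if j != i))
            pcts.append(100 * len(t & others) / len(t))
    return sum(pcts) / len(pcts)

print(avg_fitness(runs))
print(avg_duplicate_pct(runs))
```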

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, when α increases, the runtime starts to decrease, the fitness gets better (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the requirement for the test difficulty is harder than satisfying the requirement for the total test time. The experimental results also show that integrating SA gives better fitness values than without SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The duplicate question proportions between generated tests depend on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests in a single run, each containing 30 questions, then some generated tests must contain questions that also appear in other generated tests.
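The bank-size effect on duplication can be quantified. With a bank of B questions, k tests of q questions each must reuse questions whenever k·q > B, and if each test were drawn uniformly at random without replacement, two tests would share q·(q/B) questions on average. A small sketch with the numbers from the example above:

```python
B, k, q = 100, 50, 30           # bank size, tests per run, questions per test

# Pigeonhole bound: total question slots minus distinct questions available.
reused_at_least = k * q - B     # 1400 slots must be filled with repeated questions
print(reused_at_least)

# Expected overlap between two independently drawn q-question tests:
# each of the q questions of test 1 appears in test 2 with probability q/B.
expected_shared = q * q / B     # 9.0 questions shared on average
print(expected_shared)
```

This is why the average duplicate column is much lower for the 12,000-question bank than for the 1,000-question bank.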

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of those with SA are smaller. In other words, the differences in fitness values between runs are smaller with SA than without SA. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to obtain generated tests that better fit the test requirements. The reason is that the generated tests we obtain in the first run are likely close to the requirements (due to the low fitness value), and the chance of obtaining generated tests that fit the requirements poorly is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multichoice tests, provided that the quality of the multichoice questions is good (diversity in the difficulty levels of the questions and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

Figure 5: Experimental results of the runtime (0–35 s) and fitness value (0.00065–0.00125) for the large question bank in Table 6: parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA (runtime and fitness value for each), plotted against the weight constraint α from 0.1 to 0.9.


MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, focusing on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.


[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.



Figure 1: The flowchart of the MMPSO algorithm in migration parallel. After the input of extraction requirements, k threads each run a single PSO combined with SA: SA generates the initial population, the fitness F = Σ αi·Fi is evaluated, Gbest and Pbest are selected, Pbest approaches Gbest and Gbest approaches the goal, migration is executed when the migration condition is met, and the loop repeats until the stopping conditions are satisfied, after which the solution is returned.

Function Migration_Par_MMPSO_SA: Extracting tests using Migration Parallel MMPSO and SA
For each available thread t do
    Generate the initial population by using SA, as it is capable of escaping local minima. In this stage, the process of finding new solutions using SA is improved by moving Pbest to Gbest using the received information about the location of Gbest, and removing any incorrect solutions.
    While stop conditions are not met do
        Gather Gbest and all Pbest
        For each Pbest
            Move Pbest towards Gbest using location information of Gbest
            Apply velocity update using (4)
            Apply position update using (5)
        End for
        Gbest moves in a random direction to search for the optimal solution
        If the probability for migration c is met then
            Execute function Migration_MMPSO with t
        End if
    End while
End for

Algorithm 2: Pseudocode of migration parallel MMPSO with SA.
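The loop structure above can be sketched in code. The following is an illustrative skeleton under simplifying assumptions, not the paper's implementation: swarms are iterated sequentially instead of in threads, the SA initialization is omitted, a toy one-dimensional fitness stands in for the multiobjective test fitness, the standard PSO velocity/position updates stand in for the paper's equations (4) and (5), and `migrate` (copying each swarm's best particle over the next swarm's worst) is an assumed operator.

```python
import random

random.seed(0)
GOAL = 0.5                         # stand-in target (e.g., a required difficulty ODR)

def fitness(x):
    return abs(x - GOAL)

def new_swarm(n=10):               # 10 particles per swarm, as in Table 4
    return [{"x": random.random(), "v": 0.0, "pbest": None} for _ in range(n)]

def migrate(swarms):
    """Assumed migration operator: copy each swarm's best over the next swarm's worst."""
    for i, s in enumerate(swarms):
        nxt = swarms[(i + 1) % len(swarms)]
        best = min(s, key=lambda p: fitness(p["x"]))
        worst = max(nxt, key=lambda p: fitness(p["x"]))
        worst["x"], worst["v"] = best["x"], best["v"]

def run(num_swarms=4, loops=1000, w=0.7, c1=1.4, c2=1.4, tol=1e-3, c_mig=0.05):
    swarms = [new_swarm() for _ in range(num_swarms)]
    gbest = random.random()
    for _ in range(loops):
        for s in swarms:                       # one "thread" per swarm, run in turn
            for p in s:
                if p["pbest"] is None or fitness(p["x"]) < fitness(p["pbest"]):
                    p["pbest"] = p["x"]
                if fitness(p["pbest"]) < fitness(gbest):
                    gbest = p["pbest"]
                r1, r2 = random.random(), random.random()
                p["v"] = (w * p["v"] + c1 * r1 * (p["pbest"] - p["x"])
                          + c2 * r2 * (gbest - p["x"]))   # velocity update, cf. (4)
                p["x"] += p["v"]                          # position update, cf. (5)
        if random.random() < c_mig:            # migration probability c = 5%
            migrate(swarms)
        if fitness(gbest) < tol:               # stop: tolerance fitness < 0.001
            break
    return gbest

g = run()
print(fitness(g))
```

In the paper's setting each swarm would run in its own thread and the particles would encode whole 100-question tests rather than a scalar.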


Figure 2: The allocation of the difficulty level and time of each question in the small question bank (x-axis: question's id in the bank, from 1 to 1,000; y-axes: difficulty level of question, from 0 to 1, and time of question, from 0 to 140).


[15] M Q Peng Y J Gong J J Li and Y B Lin ldquoMulti-swarmparticle swarm optimization withmultiple learning strategiesrdquo inProceedings of the 2014 Annual Conference on Genetic andEvolutionary Computation pp15-16 NewYork NYUSA 2014

[16] R Vafashoar and M R Meybodi ldquoMulti swarm optimizationalgorithm with adaptive connectivity degreerdquo Applied Intel-ligence vol 48 no 4 pp 909ndash941 2017

[17] X Xia L Gui and Z-H Zhan ldquoA multi-swarm particleswarm optimization algorithm based on dynamical topologyand purposeful detectingrdquo Applied Soft Computing vol 67pp 126ndash140 2018

[18] B Nakisa M N Rastgoo M F Nasrudin M Zakree andA Nazri ldquoA multi-swarm particle swarm optimization withlocal search on multi-robot search systemrdquo Journal of Ie-oretical and Applied Information Technology vol 71 no 1pp 129ndash136 2015

[19] M Li and F Babak ldquoMulti-objective particle swarm opti-mization with gradient descent searchrdquo International Journalof Swarm Intelligence and Evolutionary Computation vol 4pp 1ndash9 2014

[20] S P Kapse and S Krishnapillai ldquoAn improved multi-ob-jective particle swarm optimization based on utopia pointguided searchrdquo International Journal of Applied MetaheuristicComputing vol 9 no 4 pp 71ndash96 2018

[21] T Cheng M F P J Chen Z Yang and S Gan ldquoA novelhybrid teaching learning based multi-objective particle swarmoptimizationrdquo Neurocomputing vol 222 pp 11ndash25 2016

[22] H A Adel L Songfeng E A A Yahya and A G A ArkanldquoA particle swarm optimization based on multi objectivefunctions with uniform designrdquo SSRG International Journal ofComputer Science and Engineering vol 3 pp 20ndash26 2016

[23] D M Alan T Gregorio H B Z Jose and T L Edgar ldquoR2-based multimany-objective particle swarm optimizationrdquoComputational Intelligence and Neuroscience vol 2016 Ar-ticle ID 1898527 10 pages 2016

[24] S P Kapse and S Krishnapillai ldquoAn improved multi-ob-jective particle swarm optimization algorithm based onadaptive local searchrdquo International Journal of Applied Evo-lutionary Computation vol 8 no 2 pp 1ndash29 2017

[25] K Deb A Pratap S Agarwal and T Meyarivan ldquoA fast andelitist multiobjective genetic algorithm NSGA-IIrdquo IEEETransactions on Evolutionary Computation vol 6 no 2pp 182ndash197 2002

[26] Z H Zhan J Li and J Cao ldquoMultiple populations formultiple objectives a coevolutionary technique for solvingmultiobjective optimization problemsrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 445ndash463 2013

[27] N ShiZhang and K K Mishra ldquoImproved multi-objectiveparticle swarm optimization algorithm for optimizing wa-termark strength in color image watermarkingrdquo AppliedIntelligence vol 47 no 2 pp 362ndash381 2017

14 Scientific Programming

[28] X Liansong and P Dazhi ldquoMulti-objective optimizationbased on chaotic particle swarm optimizationrdquo InternationalJournal of Machine Learning and Computing vol 8 no 3pp 229ndash235 2018

[29] Y Xiang and Z Xueqing ldquoMulti-swarm comprehensivelearning particle swarm optimization for solving multi-ob-jective optimization problemsrdquo PLoS One vol 12 no 2Article ID e 2017

[30] V Mokarram and M R Banan ldquoA new PSO-based algorithmfor multi-objective optimization with continuous and discretedesign variablesrdquo Structural and Multidisciplinary Optimi-zation vol 57 no 2 pp 509ndash533 2017

[31] Z B M Z Mohd J Kanesan J H Chuah S Dhanapal andG Kendall ldquoA multi-objective particle swarm optimizationalgorithm based on dynamic boundary search for constrainedoptimizationrdquo Applied Soft Computing vol 70 pp 680ndash7002018

[32] R Liu J Li J Fan C Mu and L Jiao ldquoA coevolutionarytechnique based on multi-swarm particle swarm optimizationfor dynamic multi-objective optimizationrdquo European Journalof Operational Research vol 261 no 3 pp 1028ndash1051 2017

[33] T Nguyen T Bui and B Vo ldquoMulti-swarm single-objectiveparticle swarm optimization to extract multiple-choice testsrdquoVietnam Journal of Computer Science vol 6 no 2 pp 147ndash161 2019

[34] D Hunt ldquoW A Lewis on ldquoeconomic development withunlimited supplies of labourrdquordquo Economic Ieories of Devel-opment An Analysis of Competing Paradigms pp 87ndash95Harvester Wheatsheaf New York NY USA 1989

[35] A Kharrat and M Neji ldquoA hybrid feature selection for MRIbrain tumor classificationrdquo Innovations in Bio-InspiredComputing and Applications pp 329ndash338 Springer BerlinGermany 2018

[36] E Ridge and D Kudenko ldquoTuning an algorithm using designof experimentsrdquo Experimental Methods for the Analysis ofOptimization Algorithms pp 265ndash286 Springer BerlinGermany 2010

[37] P Riccardo I Umberto L Giampaolo and L StefanoldquoGloballocal hybridization of the multi-objective particleswarm optimization with derivative-free multi-objective localsearchrdquo in Proceedings of the Congress of the Italian Society ofIndustrial and Applied Mathematics (SIMAI) pp 198ndash209 AtMilan Italy 2017

[38] S Sedarous S M El-Gokhy and E Sallam ldquoMulti-swarmmulti-objective optimization based on a hybrid strategyrdquoAlexandria Engineering Journal vol 57 no 3 pp 1619ndash16292018

[39] H S Le and H Fujita ldquoNeural-fuzzy with representative setsfor prediction of student performancerdquo Applied Intelligencevol 49 no 1 pp 172ndash187 2019

Scientific Programming 15

Page 10: Multiswarm Multiobjective Particle Swarm Optimization with …downloads.hindawi.com/journals/sp/2020/7081653.pdf · ResearchArticle Multiswarm Multiobjective Particle Swarm Optimization

Figure 2: The allocation of the difficulty level and time of question in the small question bank (x-axis: question's ID in the bank, 1-1000; y-axes: difficulty level of question, 0-1, and time of question, 0-140 seconds).

Figure 3: The allocation of the difficulty level and time of question in the large question bank (x-axis: question's ID in the bank, 1-12000; y-axes: difficulty level of question, 0-1, and time of question, 0-140 seconds).

Table 4: Experimental parameters.

The required level of difficulty (ODR): 0.5
The total test time (TR): 5400 (seconds)
The value of α: [0.1, 0.9]
The number of required questions in a test: 100
The number of questions in each section in the test: 10

10 Scientific Programming

Table 4: Continued.

The number of simultaneously generated tests in each run: 100
The number of questions in the bank: 1000 and 12000

The PSO's parameters:
The number of particles in each swarm: 10
Random values r1, r2: in [0, 1]
The percentage of Pbest individuals which receive position information from Gbest (C1): 5
The percentage of Gbest which moves to final goals (C2): 5
The percentage of migration (δ): 10
The percentage of migration probability (γ): 5
The stop condition: either when the tolerance fitness < 0.001 or when the number of movement loops > 1000

The SA's parameters:
Initial temperature: 100
Cooling rate: 0.9
Termination temperature: 0.01
Number of iterations: 100
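The SA settings in Table 4 (initial temperature 100, cooling rate 0.9, termination temperature 0.01, 100 iterations per temperature) describe a standard geometric cooling schedule. A minimal sketch, assuming generic `energy` and `neighbor` functions (the paper's actual SA operators act on candidate tests, not on this toy problem):

```python
import math
import random

def simulated_annealing(energy, neighbor, state,
                        t_init=100.0, cooling=0.9,
                        t_min=0.01, iters_per_temp=100):
    """Generic SA loop using the cooling schedule from Table 4."""
    best = state
    t = t_init
    while t > t_min:                     # termination temperature: 0.01
        for _ in range(iters_per_temp):  # 100 iterations per temperature
            cand = neighbor(state)
            delta = energy(cand) - energy(state)
            # always accept improvements; accept worse moves with prob e^(-delta/t)
            if delta < 0 or random.random() < math.exp(-delta / t):
                state = cand
                if energy(state) < energy(best):
                    best = state
        t *= cooling                     # geometric cooling, rate 0.9
    return best

# toy usage: minimize (x - 3)^2 over the reals
random.seed(1)
result = simulated_annealing(lambda x: (x - 3) ** 2,
                             lambda x: x + random.uniform(-1, 1),
                             state=0.0)
```

With these settings the loop performs about 88 cooling steps (100 × 0.9^k stays above 0.01 until k ≈ 87), i.e., roughly 8800 candidate evaluations per SA call.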

Table 5: Experimental results in the small question bank.

Weight constraint (α) | Number of runs | Successful times | Average runtime for extracting tests (s) | Average number of iteration loops | Average fitness | Average duplicate (%) | Standard deviation

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 11 | 61.3658 | 999.75 | 0.003102 | 2.43 | 0.0007071
0.2 | 50 | 445 | 47.9793 | 981.03 | 0.003117 | 2.64 | 0.0014707
0.3 | 50 | 425 | 35.8007 | 957.53 | 0.004150 | 2.73 | 0.0021772
0.4 | 50 | 530 | 30.5070 | 928.10 | 0.004850 | 2.80 | 0.0027973
0.5 | 50 | 774 | 29.5425 | 877.65 | 0.004922 | 2.85 | 0.0033383
0.6 | 50 | 1410 | 22.6973 | 754.82 | 0.003965 | 2.91 | 0.0034005
0.7 | 50 | 2900 | 14.9059 | 470.13 | 0.002026 | 2.97 | 0.0022461
0.8 | 50 | 3005 | 16.7581 | 488.31 | 0.001709 | 3.01 | 0.0017271
0.9 | 50 | 3019 | 28.5975 | 619.34 | 0.001358 | 3.04 | 0.0009634

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 4 | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496
0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454
0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364
0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905
0.5 | 50 | 4311 | 91.7621 | 318.75 | 0.000938 | 3.48 | 0.0008439
0.6 | 50 | 4776 | 84.7441 | 161.23 | 0.000746 | 3.53 | 0.0005124
0.7 | 50 | 4990 | 81.1127 | 76.75 | 0.000666 | 3.54 | 0.0002421
0.8 | 50 | 4978 | 84.6747 | 131.32 | 0.000679 | 3.52 | 0.0002518
0.9 | 50 | 4937 | 98.8690 | 339.89 | 0.000749 | 3.41 | 0.0002338

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 575 | 51.3890 | 959.29 | 0.002138 | 5.19 | 0.0008091
0.2 | 50 | 1426 | 33.2578 | 837.87 | 0.002119 | 5.60 | 0.0011804
0.3 | 50 | 1518 | 25.3135 | 779.76 | 0.002587 | 5.82 | 0.0017130
0.4 | 50 | 1545 | 21.0524 | 745.36 | 0.002977 | 5.89 | 0.0021845
0.5 | 50 | 1650 | 17.9976 | 710.92 | 0.003177 | 5.95 | 0.0025374
0.6 | 50 | 1751 | 16.0573 | 680.97 | 0.003272 | 5.92 | 0.0028531
0.7 | 50 | 2463 | 12.9467 | 540.21 | 0.002243 | 5.94 | 0.0022161
0.8 | 50 | 3315 | 12.7420 | 402.75 | 0.001374 | 5.90 | 0.0012852
0.9 | 50 | 3631 | 19.1735 | 439.27 | 0.001067 | 5.85 | 0.0006259

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 816 | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039
0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336
0.3 | 50 | 3958 | 98.9966 | 463.53 | 0.000984 | 5.36 | 0.0006349
0.4 | 50 | 4084 | 92.6536 | 357.13 | 0.000973 | 5.30 | 0.0007475
0.5 | 50 | 4344 | 88.3098 | 255.46 | 0.000898 | 5.10 | 0.0007442
0.6 | 50 | 4703 | 83.4776 | 144.00 | 0.000758 | 4.90 | 0.0004939
0.7 | 50 | 4874 | 81.2981 | 84.57 | 0.000697 | 4.70 | 0.0003271
0.8 | 50 | 4955 | 84.2094 | 106.68 | 0.000683 | 4.45 | 0.0002609
0.9 | 50 | 4937 | 94.1685 | 267.30 | 0.000746 | 4.19 | 0.0002345


Table 6: Experimental results in the large question bank.

Weight constraint (α) | Number of runs | Successful times | Average runtime for extracting tests (s) | Average number of iteration loops | Average fitness | Average duplicate (%) | Standard deviation

Parallel multiswarm multiobjective PSO (parallel MMPSO):
0.1 | 50 | 2931 | 2.350 | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50 | 4999 | 1.405 | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50 | 4997 | 0.956 | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50 | 4999 | 0.599 | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50 | 5000 | 0.361 | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50 | 5000 | 0.277 | 79.32 | 0.000663 | 1.05 | 0.000235
0.7 | 50 | 5000 | 0.322 | 92.33 | 0.000669 | 1.05 | 0.000238
0.8 | 50 | 5000 | 0.492 | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50 | 5000 | 1.098 | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):
0.1 | 50 | 3055 | 9.975 | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50 | 5000 | 8.490 | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50 | 5000 | 7.443 | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50 | 5000 | 7.303 | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50 | 5000 | 6.988 | 99.92 | 0.000663 | 1.07 | 0.000235
0.6 | 50 | 5000 | 6.734 | 61.42 | 0.000662 | 1.07 | 0.000236
0.7 | 50 | 5000 | 5.343 | 69.02 | 0.000661 | 1.07 | 0.000235
0.8 | 50 | 5000 | 7.006 | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50 | 5000 | 7.958 | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):
0.1 | 50 | 2943 | 3.352 | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50 | 4995 | 1.933 | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50 | 4998 | 1.243 | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50 | 5000 | 0.857 | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50 | 4999 | 0.602 | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50 | 5000 | 0.444 | 78.89 | 0.000669 | 1.04 | 0.000234
0.7 | 50 | 5000 | 0.498 | 92.53 | 0.000668 | 1.05 | 0.000234
0.8 | 50 | 5000 | 0.794 | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50 | 5000 | 1.576 | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):
0.1 | 50 | 3122 | 10.200 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50 | 5000 | 8.568 | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50 | 5000 | 7.735 | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50 | 5000 | 7.319 | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50 | 5000 | 6.962 | 99.66 | 0.000665 | 1.06 | 0.000238
0.6 | 50 | 5000 | 6.783 | 61.33 | 0.000660 | 1.07 | 0.000237
0.7 | 50 | 5000 | 6.419 | 69.02 | 0.000666 | 1.07 | 0.000237
0.8 | 50 | 5000 | 6.146 | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50 | 5000 | 7.162 | 319.68 | 0.000731 | 1.03 | 0.000213

Figure 4: Experimental results of the runtime and fitness value in Table 5 (small question bank; x-axis: weight constraint α, 0.1-0.9; y-axes: runtime in seconds and fitness value; eight series: parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, each plotted for runtime and for fitness value).


where z is the number of experimental runs, y̅ is the average fitness of all runs, and y_i is the fitness value of the ith run.
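For concreteness, this computation can be sketched as follows, assuming the conventional population form of the standard deviation (our reading of the equation above, not a verbatim reproduction of it):

```python
import math

def fitness_std(fitness_values):
    """Standard deviation of the run fitness values y_i around
    their mean y-bar, over z experimental runs."""
    z = len(fitness_values)
    y_bar = sum(fitness_values) / z
    return math.sqrt(sum((y - y_bar) ** 2 for y in fitness_values) / z)

# e.g., three runs with average fitness 0.003, 0.004, and 0.005
sd = fitness_std([0.003, 0.004, 0.005])
```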

The standard deviation is used to assess the stability of the algorithms: if its value is low, then the generated tests of each run do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, towards the test difficulty constraint, or towards a balance between the two. We can therefore select α to suit our requirements, emphasizing either the test duration or the test difficulty.
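As an illustration of the role of α, a weighted-sum fitness of the form below captures the trade-off described here; the exact fitness function is defined earlier in the paper, so the normalization and both deviation terms should be treated as assumptions for illustration only:

```python
def weighted_fitness(avg_difficulty, total_time,
                     odr=0.5, tr=5400.0, alpha=0.5):
    """Hypothetical weighted sum: alpha weights the difficulty deviation
    (target ODR = 0.5); (1 - alpha) weights the normalized duration
    deviation (target TR = 5400 s)."""
    difficulty_term = abs(avg_difficulty - odr)
    duration_term = abs(total_time - tr) / tr  # normalized to be unitless
    return alpha * difficulty_term + (1 - alpha) * duration_term

# a larger alpha penalizes the same difficulty error more heavily
low = weighted_fitness(0.6, 5400.0, alpha=0.1)
high = weighted_fitness(0.6, 5400.0, alpha=0.9)
```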

5.3. Experimental Results. The experiments are executed with the parameters following Ridge and Kudenko [36] in Table 3, and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires several iteration loops to successfully extract the 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of all required loops over all 50 runs. The average fitness is the average of the fitness values of all 5000 generated tests. The average duplicate indicates the average number of duplicate questions among the 100 generated tests, over all 50 runs; it is also used to indicate the diversity of the tests: the lower the value, the more diverse the tests.
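The paper does not spell out how the duplicate percentage is computed; one plausible reading, measuring the average pairwise overlap of question IDs between the generated tests, can be sketched as:

```python
from itertools import combinations

def avg_duplicate_percent(tests):
    """Average percentage of shared questions over all pairs of tests.
    Each test is a set of question IDs; assumes equal-length tests."""
    pairs = list(combinations(tests, 2))
    if not pairs:
        return 0.0
    overlaps = [100.0 * len(a & b) / len(a) for a, b in pairs]
    return sum(overlaps) / len(overlaps)

# three 4-question tests; the first two share questions 3 and 4
tests = [{1, 2, 3, 4}, {3, 4, 5, 6}, {7, 8, 9, 10}]
dup = avg_duplicate_percent(tests)
```

A lower value means the tests extracted in one run share fewer questions, i.e., are more diverse.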

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, as α increases, the runtime decreases, the fitness improves (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the test difficulty requirement is harder than satisfying the total test time requirement. The experimental results also show that integrating SA gives better fitness values than not using SA; however, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100, we need to generate 50 tests in a single run, and each test contains 30 questions, then some generated tests must contain questions that also appear in other generated tests, since 50 × 30 = 1500 question slots have to be filled from only 100 distinct questions.

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since their standard deviation values are smaller. In other words, the differences in fitness values between runs are smaller with SA than without it. The smaller standard deviations and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO-with-SA algorithms many times to obtain generated tests that better fit the test requirements: the tests obtained in the first run are likely already close to the requirements (due to the low fitness values), and the chance of obtaining tests that fit the requirements poorly is low (due to the low standard deviation values).

Figure 5: Experimental results of the runtime and fitness value in Table 6 (large question bank; x-axis: weight constraint α, 0.1-0.9; y-axes: runtime in seconds and fitness value; eight series: parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA, each plotted for runtime and for fitness value).

6. Conclusions and Future Studies

Generating question papers from a question bank is an important activity in extracting multiple-choice tests, and it works well when the quality of the question bank is good (diverse difficulty levels and a large number of questions in the bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms, MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with a focus on fine-tuning the PSO parameters by using some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575-1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616-624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405-1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218-1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298-305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759-1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883-898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383-23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920-22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868-20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863-22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282-295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771-790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909-941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126-140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129-136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1-9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71-96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11-25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20-26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1-29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362-381, 2017.

[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229-235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509-533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680-700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028-1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147-161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87-95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329-338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265-286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198-209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619-1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172-187, 2019.

Scientific Programming 15

Page 11: Multiswarm Multiobjective Particle Swarm Optimization with …downloads.hindawi.com/journals/sp/2020/7081653.pdf · ResearchArticle Multiswarm Multiobjective Particle Swarm Optimization

Table 4 Continuede number of simultaneously generated testsin each run 100

e number of questions in the bank 1000 and 12000

e PSOrsquos parameters

e number of particles in each swarm 10Random value r1 r2 are in [01]

e percentage of Pbest individuals which receive position information fromGbest (C1) 5e percentage of Gbest which moves to final goals (C2) 5

e percentage of migration δ 10e percentage of migration probability c 5

e stop condition either when the tolerance fitness lt0001 or when the number ofmovement loops gt1000

e SArsquos parameters

Initial temperature 100Cooling rate 09

Termination temperature 001Number of iterations 100

Table 5 Experimental results in the small question bank

AlgorithmsWeight

constraint(α)

Numberof runs

Successfultimes

Averageruntime for

extracting tests(second)

Averagenumber ofiterationloops

Averagefitness

Averageduplicate

()

Standarddeviation

Parallel multiswarmmultiobjective PSO(parallel MMPSO)

01 50 11 613658 99975 0003102 243 0000707102 50 445 479793 98103 0003117 264 0001470703 50 425 358007 95753 0004150 273 0002177204 50 530 305070 92810 0004850 280 0002797305 50 774 295425 87765 0004922 285 0003338306 50 1410 226973 75482 0003965 291 0003400507 50 2900 149059 47013 0002026 297 0002246108 50 3005 167581 48831 0001709 301 0001727109 50 3019 285975 61934 0001358 304 00009634

Parallel multiswarmmultiobjective PSO withSA (parallel MMPSO withSA)

01 50 4 1422539 99998 0003080 298 0000649602 50 2912 1323828 90042 0001265 327 0000745403 50 3681 1119513 65042 0001123 338 0000836404 50 3933 1000204 47491 0001085 344 0000990505 50 4311 917621 31875 0000938 348 0000843906 50 4776 847441 16123 0000746 353 0000512407 50 4990 811127 7675 0000666 354 0000242108 50 4978 846747 13132 0000679 352 0000251809 50 4937 988690 33989 0000749 341 00002338

Migration parallelmultiswarm multiobjectivePSO (migration parallelMMPSO)

01 50 575 513890 95929 0002138 519 0000809102 50 1426 332578 83787 0002119 560 0001180403 50 1518 253135 77976 0002587 582 0001713004 50 1545 210524 74536 0002977 589 0002184505 50 1650 179976 71092 0003177 595 0002537406 50 1751 160573 68097 0003272 592 0002853107 50 2463 129467 54021 0002243 594 0002216108 50 3315 127420 40275 0001374 590 0001285209 50 3631 191735 43927 0001067 585 00006259

Migration parallelmultiswarm multiobjectivePSO with SA (migrationparallel MMPSO with SA)

01 50 816 1396821 95242 0002183 382 0000903902 50 3641 1114438 63808 0001039 519 0000533603 50 3958 989966 46353 0000984 536 0000634904 50 4084 926536 35713 0000973 530 0000747505 50 4344 883098 25546 0000898 510 0000744206 50 4703 834776 14400 0000758 490 0000493907 50 4874 812981 8457 0000697 470 0000327108 50 4955 842094 10668 0000683 445 0000260909 50 4937 941685 26730 0000746 419 00002345

Scientific Programming 11

Table 6: Experimental results in the large question bank.

Parallel multiswarm multiobjective PSO (parallel MMPSO):

α   | Runs | Successful times | Avg. runtime (s) | Avg. iteration loops | Avg. fitness | Avg. duplicate (%) | Std. deviation
0.1 | 50   | 2931 | 2.350  | 888.85 | 0.001137 | 0.95 | 0.000476
0.2 | 50   | 4999 | 1.405  | 484.33 | 0.000725 | 1.03 | 0.000219
0.3 | 50   | 4997 | 0.956  | 296.80 | 0.000689 | 1.04 | 0.000234
0.4 | 50   | 4999 | 0.599  | 190.55 | 0.000676 | 1.04 | 0.000233
0.5 | 50   | 5000 | 0.361  | 121.24 | 0.000668 | 1.05 | 0.000236
0.6 | 50   | 5000 | 0.277  | 79.32  | 0.000663 | 1.05 | 0.000235
0.7 | 50   | 5000 | 0.322  | 92.33  | 0.000669 | 1.05 | 0.000238
0.8 | 50   | 5000 | 0.492  | 173.19 | 0.000673 | 1.04 | 0.000231
0.9 | 50   | 5000 | 1.098  | 384.13 | 0.000738 | 1.02 | 0.000213

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA):

α   | Runs | Successful times | Avg. runtime (s) | Avg. iteration loops | Avg. fitness | Avg. duplicate (%) | Std. deviation
0.1 | 50   | 3055 | 9.975  | 890.40 | 0.001095 | 0.96 | 0.000432
0.2 | 50   | 5000 | 8.490  | 469.23 | 0.000709 | 1.04 | 0.000224
0.3 | 50   | 5000 | 7.443  | 275.54 | 0.000686 | 1.05 | 0.000230
0.4 | 50   | 5000 | 7.303  | 168.91 | 0.000668 | 1.06 | 0.000237
0.5 | 50   | 5000 | 6.988  | 99.92  | 0.000663 | 1.07 | 0.000235
0.6 | 50   | 5000 | 6.734  | 61.42  | 0.000662 | 1.07 | 0.000236
0.7 | 50   | 5000 | 5.343  | 69.02  | 0.000661 | 1.07 | 0.000235
0.8 | 50   | 5000 | 7.006  | 132.27 | 0.000676 | 1.06 | 0.000236
0.9 | 50   | 5000 | 7.958  | 319.36 | 0.000734 | 1.03 | 0.000211

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO):

α   | Runs | Successful times | Avg. runtime (s) | Avg. iteration loops | Avg. fitness | Avg. duplicate (%) | Std. deviation
0.1 | 50   | 2943 | 3.352  | 886.90 | 0.001144 | 0.95 | 0.000482
0.2 | 50   | 4995 | 1.933  | 488.60 | 0.000724 | 1.02 | 0.000219
0.3 | 50   | 4998 | 1.243  | 295.69 | 0.000688 | 1.04 | 0.000231
0.4 | 50   | 5000 | 0.857  | 190.20 | 0.000667 | 1.04 | 0.000238
0.5 | 50   | 4999 | 0.602  | 120.88 | 0.000665 | 1.05 | 0.000240
0.6 | 50   | 5000 | 0.444  | 78.89  | 0.000669 | 1.04 | 0.000234
0.7 | 50   | 5000 | 0.498  | 92.53  | 0.000668 | 1.05 | 0.000234
0.8 | 50   | 5000 | 0.794  | 171.98 | 0.000669 | 1.04 | 0.000236
0.9 | 50   | 5000 | 1.576  | 383.09 | 0.000738 | 1.02 | 0.000209

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA):

α   | Runs | Successful times | Avg. runtime (s) | Avg. iteration loops | Avg. fitness | Avg. duplicate (%) | Std. deviation
0.1 | 50   | 3122 | 10.200 | 888.48 | 0.001091 | 0.96 | 0.000436
0.2 | 50   | 5000 | 8.568  | 469.50 | 0.000716 | 1.04 | 0.000222
0.3 | 50   | 5000 | 7.735  | 276.03 | 0.000678 | 1.05 | 0.000235
0.4 | 50   | 5000 | 7.319  | 167.84 | 0.000674 | 1.06 | 0.000234
0.5 | 50   | 5000 | 6.962  | 99.66  | 0.000665 | 1.06 | 0.000238
0.6 | 50   | 5000 | 6.783  | 61.33  | 0.000660 | 1.07 | 0.000237
0.7 | 50   | 5000 | 6.419  | 69.02  | 0.000666 | 1.07 | 0.000237
0.8 | 50   | 5000 | 6.146  | 133.45 | 0.000673 | 1.06 | 0.000234
0.9 | 50   | 5000 | 7.162  | 319.68 | 0.000731 | 1.03 | 0.000213

[Figure 4: dual-axis plot of fitness value and runtime (s) against the weight constraint α (0.1–0.9) for the small question bank, with runtime and fitness-value curves for parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA.]

Figure 4: Experimental results for the runtime and fitness values in Table 5 (the small question bank).


where z is the number of experimental runs, y is the average fitness over all runs, and yi is the fitness value of the ith run.

The standard deviation is used to assess the stability of the algorithms: if its value is low, the generated tests of each run do not differ much in fitness value. The weight constraint α is also examined, as it balances the objective functions. In our case, a change in α can shift the importance towards the test duration constraint, the test difficulty constraint, or a balance between the two. We can therefore select α to suit our requirements, emphasizing either the test duration or the test difficulty.
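As a concrete sketch, the stability check described above can be computed as follows. The fitness values are invented for illustration, and since this excerpt does not show whether the paper's formula divides by z or z - 1, the version below, which divides by z, is an assumption:

```python
import math

def stability_stddev(run_fitnesses):
    """Standard deviation of per-run fitness values:
    sqrt(sum((y_i - y_bar)^2) / z), with z the number of runs."""
    z = len(run_fitnesses)
    y_bar = sum(run_fitnesses) / z
    return math.sqrt(sum((y - y_bar) ** 2 for y in run_fitnesses) / z)

# Two hypothetical algorithms with the same mean fitness: the one whose
# runs cluster tightly has the smaller standard deviation, i.e., its
# generated tests differ less in fitness from run to run.
tight = stability_stddev([0.00067, 0.00066, 0.00068])
spread = stability_stddev([0.00030, 0.00100, 0.00071])
assert tight < spread
```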

5.3. Experimental Results. The experiments were executed with the parameters following Ridge and Kudenko [36] in Table 3, and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). Comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value. Each run also requires a number of iteration loops to successfully extract the 100 candidate tests. The average runtime for extracting tests is the average runtime over all 50 experimental runs. The average number of iteration loops is the average of the required loops over all 50 runs. The average fitness is the average of the fitness values of all 5000 generated tests. The average duplicate indicates the average number of duplicate questions among the 100 generated tests across all 50 runs; it also serves as an indicator of test diversity. The lower the value, the more diverse the tests.
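The "average duplicate" measure can be illustrated with a small sketch. The counting rule below (questions shared by a pair of tests, averaged over all pairs) is our reading of the description above and should be treated as an assumption:

```python
from itertools import combinations

def average_duplicate(tests):
    """Average number of questions shared by a pair of generated tests,
    taken over all pairs; lower values mean more diverse tests."""
    pairs = list(combinations(tests, 2))
    shared = [len(set(a) & set(b)) for a, b in pairs]
    return sum(shared) / len(pairs)

# Three toy tests of question IDs drawn from one bank: the first two
# share questions 4 and 5, the third overlaps with neither.
tests = [[1, 2, 3, 4, 5], [4, 5, 6, 7, 8], [9, 10, 11, 12, 13]]
assert abs(average_duplicate(tests) - 2 / 3) < 1e-9
```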

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Judging by the average fitness value, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, as α increases, the runtime decreases, the fitness improves (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the requirement for test difficulty is harder than satisfying the requirement for total test time. The experimental results also show that integrating SA gives a better fitness value than running without SA. However, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.
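The effect of α can be made concrete with a weighted two-objective fitness of the kind discussed above. The paper's exact objective functions are defined earlier in the article and are not visible in this excerpt, so the linear combination and normalization below are illustrative assumptions:

```python
def weighted_fitness(difficulty, duration, target_difficulty, target_duration, alpha):
    """Weighted sum of the two objective errors; smaller is better.
    alpha weights the difficulty error, (1 - alpha) the duration error."""
    difficulty_error = abs(difficulty - target_difficulty)
    duration_error = abs(duration - target_duration) / target_duration
    return alpha * difficulty_error + (1 - alpha) * duration_error

# A candidate test that hits the required duration but misses the
# required difficulty is penalized more heavily as alpha grows.
miss_low = weighted_fitness(0.6, 90, 0.5, 90, alpha=0.1)
miss_high = weighted_fitness(0.6, 90, 0.5, 90, alpha=0.9)
assert miss_low < miss_high
```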

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100, we need to generate 50 tests in a single run, and each test contains 30 questions, then some generated tests must contain questions that also appear in other generated tests.
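The bank-size example above is a pigeonhole argument: 50 tests of 30 questions each fill 1500 question slots, while a bank of 100 distinct questions can cover at most 100 of them without repetition, so cross-test duplicates are unavoidable. In numbers:

```python
bank_size = 100          # distinct questions available
num_tests = 50           # tests generated in a single run
questions_per_test = 30

slots = num_tests * questions_per_test   # 1500 question slots to fill
unavoidable_repeats = slots - bank_size  # at least 1400 slots must reuse a question
assert unavoidable_repeats == 1400
```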

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since the standard deviation values of the SA variants are smaller. In other words, the differences in fitness values between runs are smaller with SA than without it. The smaller standard deviation values and smaller average fitness values also mean that we are less likely to have to rerun the MMPSO with SA algorithms many times to obtain generated tests that better fit the test requirements. The reason is that the generated tests obtained in the first run are likely to be close to the requirements (due to the low fitness value), and the chance of obtaining generated tests that fit the requirements poorly is low (due to the low standard deviation value).

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multiple-choice tests, and the quality of the multiple-choice questions is good (diversity in the difficulty levels of the questions and a large number of questions in the question bank). In this paper, we propose the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluate two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal

[Figure 5: dual-axis plot of fitness value and runtime (s) against the weight constraint α (0.1–0.9) for the large question bank, with runtime and fitness-value curves for parallel MMPSO, parallel MMPSO with SA, migration parallel MMPSO, and migration parallel MMPSO with SA.]

Figure 5: Experimental results for the runtime and fitness values in Table 6 (the large question bank).


MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with attention to fine-tuning the PSO parameters through some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, such as using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.


[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on "economic development with unlimited supplies of labour"," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.


Page 12: Multiswarm Multiobjective Particle Swarm Optimization with …downloads.hindawi.com/journals/sp/2020/7081653.pdf · ResearchArticle Multiswarm Multiobjective Particle Swarm Optimization

Table 6 Experimental results in the large question bank

AlgorithmsWeight

constraint(α)

Numberof runs

Successfultimes

Averageruntime for

extracting tests(second)

Averagenumber ofiterationloops

Averagefitness

Averageduplicate

()

Standarddeviation

Parallel multiswarmmultiobjective PSO(parallel MMPSO)

01 50 2931 2350 88885 0001137 095 000047602 50 4999 1405 48433 0000725 103 000021903 50 4997 956 29680 0000689 104 000023404 50 4999 599 19055 0000676 104 000023305 50 5000 361 12124 0000668 105 000023606 50 5000 277 7932 0000663 105 000023507 50 5000 322 9233 0000669 105 000023808 50 5000 492 17319 0000673 104 000023109 50 5000 1098 38413 0000738 102 0000213

Parallel multiswarmmultiobjective PSO withSA (parallel MMPSO withSA)

01 50 3055 9975 89040 0001095 096 000043202 50 5000 8490 46923 0000709 104 000022403 50 5000 7443 27554 0000686 105 000023004 50 5000 7303 16891 0000668 106 000023705 50 5000 6988 9992 0000663 107 000023506 50 5000 6734 6142 0000662 107 000023607 50 5000 5343 6902 0000661 107 000023508 50 5000 7006 13227 0000676 106 000023609 50 5000 7958 31936 0000734 103 0000211

Migration parallelmultiswarm multiobjectivePSO (migration parallelMMPSO)

01 50 2943 3352 88690 0001144 095 000048202 50 4995 1933 48860 0000724 102 000021903 50 4998 1243 29569 0000688 104 000023104 50 5000 857 19020 0000667 104 000023805 50 4999 602 12088 0000665 105 000024006 50 5000 444 7889 0000669 104 000023407 50 5000 498 9253 0000668 105 000023408 50 5000 794 17198 0000669 104 000023609 50 5000 1576 38309 0000738 102 0000209

Migration parallelmultiswarm multiobjectivePSO with SA (migrationparallel MMPSO with SA)

01 50 3122 10200 88848 0001091 096 000043602 50 5000 8568 46950 0000716 104 000022203 50 5000 7735 27603 0000678 105 000023504 50 5000 7319 16784 0000674 106 000023405 50 5000 6962 9966 0000665 106 000023806 50 5000 6783 6133 0000660 107 000023707 50 5000 6419 6902 0000666 107 000023708 50 5000 6146 13345 0000673 106 000023409 50 5000 7162 31968 0000731 103 0000213

000065000115000165000215000265000315000365000415000465000515

020406080

100120140

01 02 03 04 05 06 07 08 09

Fitn

ess v

alue

Runt

ime (

s)

Weight constraint (α)

Experimental results of the small question bank

Parallel MMPSO (runtime)Parallel MMPSO with SA (runtime)Parallel MMPSO (fitness value)Parallel MMPSO with SA (fitness value)

Migration parallel MMPSO (runtime)Migration parallel MMPSO with SA (runtime)Migration parallel MMPSO (fitness value)Migration parallel MMPSO with SA (fitness value)

Figure 4 Experimental results of the runtime and fitness value are in Table 4

12 Scientific Programming

where z is the number of experimental runs y is the averagefitness of all runs yi is a fitness value of run ith

e standard deviation is used to assess the stability ofthe algorithms If its value is low then the generated tests ofeach run do not have much difference in the fitness valuee weight constraint α is also being examined as it bal-ances the objective functions In our cases a change in αcan shift the importance towards the test duration con-straint the test difficulty constraint or the balance betweenthose two We can select α to suit what we require em-phasizing more on either the test duration or the testdifficulty

53 Experimental Results e experiments are executedwith the parameters following Ridge and Kudenko [36] inTable 3 and the results are presented in Table 5 (run oncomputer 4-CPUs) and Table 6 (run on computer 16-CPUs)e comparisons of runtime and fitness of the small andlarge question bank are presented in Figures 4 and 5 Re-garding Tables 5 and 6 each run extracts 100 tests simul-taneously and each test has a fitness value Each run alsorequires several iteration loops to successfully extract 100candidate tests e average runtime for extracting tests isthe average runtimes of all 50 experimental runs e av-erage number of iteration loops is the average of all requiredloops of all 50 runs e average fitness is the average of allfitness values of 5000 generated tests e average duplicateindicates the average number of duplicate questions among100 generated tests of all 50 runs e average duplicate isalso used to indicate the diversity of tests e lower thevalue the more diverse the tests

When α is at the lower range [01 05] the correctnessfor difficulty value of each generated test is emphasized morethan that of the total test time Based on the average fitnessvalue all algorithms appear to have a harder time generatingtests at the lower range [01 05] compared with at the higherrange [05 09] Additionally when α increases the runtimestarts to decrease the fitness gets better (ie the fitnessvalues get smaller) and the numbers of loops required forgenerating tests decrease Apparently satisfying the

requirement for the test difficulty requirement is harder thansatisfying the requirement for total test timee experimentresults also show that integrating SA gives a better fitnessvalue without SA However runtimes of algorithms with SAare longer as a trade-off for better fitness values

All algorithms can generate tests with acceptable per-centages of duplicate questions among generated tests eduplicate question proportions between generated testsdepend on the sizes of the question bank For example if thequestion bankrsquos size is 100 we need to generate 50 tests in asingle run and each test contains 30 questions and thensome generated tests should contain similar questions of theother generated tests

Based on the standard deviation in Tables 5 and 6 allMMPSO algorithms with SA are more stable than thosewithout SA since the standard deviation values of those withSA are smaller In other words the differences in fitnessvalues between runs are smaller with SA than without SAe smaller standard deviation values and smaller averagefitness values also mean that we less likely need to rerun theMMPSO with SA algorithms many times to get the gen-erated tests that better fit the test requirementse reason isthat the generated tests we obtain at the first run are likelyclose to the requirements (due to the low fitness value) andthe chance that we obtain those generated tests with less fit torequirements is low (due to the low standard deviationvalue)

6 Conclusions and Future Studies

Generation of question papers through a question bank is animportant activity in extracting multichoice tests equality of multichoice questions is good (diversity of thelevel of difficulty of the question and a large number ofquestions in question bank) In this paper we propose theuse of MMPSO to solve the problem of generating multi-objective multiple k tests in a single rune objectives of thetests are the required level of difficulty and the required totaltest timee experiments evaluate two algorithms MMPSOwith SA and normal MMPSO e results indicate thatMMPSO with SA gives better solutions than normal

000065

000075

000085

000095

000105

000115

000125

0

5

10

15

20

25

30

35

01 02 03 04 05 06 07 08 09

Fitn

ess v

alue

Runt

ime (

s)

Weight constraint (α)

Experimental results of the large question bank

Parallel MMPSO (runtime)Parallel MMPSO with SA (runtime)Parallel MMPSO (fitness value)Parallel MMPSO with SA (fitness value)

Migration parallel MMPSO (runtime)Migration parallel MMPSO with SA (runtime)Migration parallel MMPSO (fitness value)Migration parallel MMPSO with SA (fitness value)

Figure 5 Experimental results of the runtime and fitness value are in Table 5

Scientific Programming 13

MMPSO based on various criteria such as diversities ofsolutions and numbers of successful attempts

Future studies may focus on investigating the use of theproposed hybrid approach [37 38] to solve other NP-hardand combinatorial optimization problems which focus onfine-tuning the PSO parameters by using some type ofadaptive strategies Additionally we will extend our problemto provide feedback to instructors from multiple-choicedata such as using fuzzy theory [39] and PSO with SA formining association rules to compute the difficulty levels ofquestions

Data Availability

e data used in this study are available from the corre-sponding author upon request

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

References

[1] F Ting J H Wei C T Kim and Q Tian ldquoQuestion clas-sification for E-learning by artificial neural networkrdquo inProceedings of the Fourth International Conference on Infor-mation Communications and Signal Processing pp 1575ndash1761 Hefei China 2003

[2] S C Cheng Y T Lin and Y M Huang ldquoDynamic questiongeneration system for web-based testing using particle swarmoptimizationrdquo Expert Systems with Applications vol 36 no 1pp 616ndash624 2009

[3] T Bui T Nguyen B Vo T Nguyen W Pedrycz andV Snasel ldquoApplication of particle swarm optimization tocreate multiple-choice testsrdquo Journal of Information Science ampEngineering vol 34 no 6 pp 1405ndash1423 2018

[4] M Yildirim ldquoHeuristic optimization methods for generatingtest from a question bankrdquo Advances in Artificial Intelligencepp 1218ndash1229 Springer Berlin Germany 2007

[5] M Yildirim ldquoA genetic algorithm for generating test from aquestion bankrdquo Computer Applications in Engineering Edu-cation vol 18 no 2 pp 298ndash305 2010

[6] B R Antonio and S Chen ldquoMulti-swarm hybrid for multi-modal optimizationrdquo in Proceedings of the IEEE Congress onEvolutionary Computation pp 1759ndash1766 Brisbane Aus-tralia June 2012

[7] N Hou F He Y Zhou Y Chen and X Yan ldquoA parallelgenetic algorithm with dispersion correction for HWSWpartitioning on multi-core CPU and many-core GPUrdquo IEEEAccess vol 6 pp 883ndash898 2018

[8] M E C Bento D Dotta R Kuiava and R A Ramos ldquoAprocedure to design fault-tolerant wide-area damping con-trollersrdquo IEEE Access vol 6 pp 23383ndash23405 2018

[9] C Han L Wang Z Zhang J Xie and Z Xing ldquoA multi-objective genetic algorithm based on fitting and interpola-tionrdquo IEEE Access vol 6 pp 22920ndash22929 2018

[10] J Lu Z Chen Y Yang and L V Ming ldquoOnline estimation ofstate of power for lithium-ion batteries in electric vehiclesusing genetic algorithmrdquo IEEE Access vol 6 pp 20868ndash20880 2018

[11] X-Y Liu Y Liang S Wang Z-Y Yang and H-S Ye ldquoAhybrid genetic algorithmwith wrapper-embedded approaches

for feature selectionrdquo IEEE Access vol 6 pp 22863ndash228742018

[12] J Kennedy and R C Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 Perth Australia 1995

[13] G Sen M Krishnamoorthy N Rangaraj and V NarayananldquoExact approaches for static data segment allocation problemin an information networkrdquo Computers amp Operations Re-search vol 62 pp 282ndash295 2015

[14] G Sen and M Krishnamoorthy ldquoDiscrete particle swarmoptimization algorithms for two variants of the static datasegment location problemrdquoApplied Intelligence vol 48 no 3pp 771ndash790 2018

[15] M Q Peng Y J Gong J J Li and Y B Lin ldquoMulti-swarmparticle swarm optimization withmultiple learning strategiesrdquo inProceedings of the 2014 Annual Conference on Genetic andEvolutionary Computation pp15-16 NewYork NYUSA 2014

[16] R Vafashoar and M R Meybodi ldquoMulti swarm optimizationalgorithm with adaptive connectivity degreerdquo Applied Intel-ligence vol 48 no 4 pp 909ndash941 2017

[17] X Xia L Gui and Z-H Zhan ldquoA multi-swarm particleswarm optimization algorithm based on dynamical topologyand purposeful detectingrdquo Applied Soft Computing vol 67pp 126ndash140 2018

[18] B Nakisa M N Rastgoo M F Nasrudin M Zakree andA Nazri ldquoA multi-swarm particle swarm optimization withlocal search on multi-robot search systemrdquo Journal of Ie-oretical and Applied Information Technology vol 71 no 1pp 129ndash136 2015

[19] M Li and F Babak ldquoMulti-objective particle swarm opti-mization with gradient descent searchrdquo International Journalof Swarm Intelligence and Evolutionary Computation vol 4pp 1ndash9 2014

[20] S P Kapse and S Krishnapillai ldquoAn improved multi-ob-jective particle swarm optimization based on utopia pointguided searchrdquo International Journal of Applied MetaheuristicComputing vol 9 no 4 pp 71ndash96 2018

[21] T Cheng M F P J Chen Z Yang and S Gan ldquoA novelhybrid teaching learning based multi-objective particle swarmoptimizationrdquo Neurocomputing vol 222 pp 11ndash25 2016

[22] H A Adel L Songfeng E A A Yahya and A G A ArkanldquoA particle swarm optimization based on multi objectivefunctions with uniform designrdquo SSRG International Journal ofComputer Science and Engineering vol 3 pp 20ndash26 2016

[23] D M Alan T Gregorio H B Z Jose and T L Edgar ldquoR2-based multimany-objective particle swarm optimizationrdquoComputational Intelligence and Neuroscience vol 2016 Ar-ticle ID 1898527 10 pages 2016

[24] S P Kapse and S Krishnapillai ldquoAn improved multi-ob-jective particle swarm optimization algorithm based onadaptive local searchrdquo International Journal of Applied Evo-lutionary Computation vol 8 no 2 pp 1ndash29 2017

[25] K Deb A Pratap S Agarwal and T Meyarivan ldquoA fast andelitist multiobjective genetic algorithm NSGA-IIrdquo IEEETransactions on Evolutionary Computation vol 6 no 2pp 182ndash197 2002

[26] Z H Zhan J Li and J Cao ldquoMultiple populations formultiple objectives a coevolutionary technique for solvingmultiobjective optimization problemsrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 445ndash463 2013

[27] N ShiZhang and K K Mishra ldquoImproved multi-objectiveparticle swarm optimization algorithm for optimizing wa-termark strength in color image watermarkingrdquo AppliedIntelligence vol 47 no 2 pp 362ndash381 2017

14 Scientific Programming

[28] X Liansong and P Dazhi ldquoMulti-objective optimizationbased on chaotic particle swarm optimizationrdquo InternationalJournal of Machine Learning and Computing vol 8 no 3pp 229ndash235 2018

[29] Y Xiang and Z Xueqing ldquoMulti-swarm comprehensivelearning particle swarm optimization for solving multi-ob-jective optimization problemsrdquo PLoS One vol 12 no 2Article ID e 2017

[30] V Mokarram and M R Banan ldquoA new PSO-based algorithmfor multi-objective optimization with continuous and discretedesign variablesrdquo Structural and Multidisciplinary Optimi-zation vol 57 no 2 pp 509ndash533 2017

[31] Z B M Z Mohd J Kanesan J H Chuah S Dhanapal andG Kendall ldquoA multi-objective particle swarm optimizationalgorithm based on dynamic boundary search for constrainedoptimizationrdquo Applied Soft Computing vol 70 pp 680ndash7002018

[32] R Liu J Li J Fan C Mu and L Jiao ldquoA coevolutionarytechnique based on multi-swarm particle swarm optimizationfor dynamic multi-objective optimizationrdquo European Journalof Operational Research vol 261 no 3 pp 1028ndash1051 2017

[33] T Nguyen T Bui and B Vo ldquoMulti-swarm single-objectiveparticle swarm optimization to extract multiple-choice testsrdquoVietnam Journal of Computer Science vol 6 no 2 pp 147ndash161 2019

[34] D Hunt ldquoW A Lewis on ldquoeconomic development withunlimited supplies of labourrdquordquo Economic Ieories of Devel-opment An Analysis of Competing Paradigms pp 87ndash95Harvester Wheatsheaf New York NY USA 1989

[35] A Kharrat and M Neji ldquoA hybrid feature selection for MRIbrain tumor classificationrdquo Innovations in Bio-InspiredComputing and Applications pp 329ndash338 Springer BerlinGermany 2018

[36] E Ridge and D Kudenko ldquoTuning an algorithm using designof experimentsrdquo Experimental Methods for the Analysis ofOptimization Algorithms pp 265ndash286 Springer BerlinGermany 2010

[37] P Riccardo I Umberto L Giampaolo and L StefanoldquoGloballocal hybridization of the multi-objective particleswarm optimization with derivative-free multi-objective localsearchrdquo in Proceedings of the Congress of the Italian Society ofIndustrial and Applied Mathematics (SIMAI) pp 198ndash209 AtMilan Italy 2017

[38] S Sedarous S M El-Gokhy and E Sallam ldquoMulti-swarmmulti-objective optimization based on a hybrid strategyrdquoAlexandria Engineering Journal vol 57 no 3 pp 1619ndash16292018

[39] H S Le and H Fujita ldquoNeural-fuzzy with representative setsfor prediction of student performancerdquo Applied Intelligencevol 49 no 1 pp 172ndash187 2019

Scientific Programming 15

Page 13: Multiswarm Multiobjective Particle Swarm Optimization with …downloads.hindawi.com/journals/sp/2020/7081653.pdf · ResearchArticle Multiswarm Multiobjective Particle Swarm Optimization

where z is the number of experimental runs y is the averagefitness of all runs yi is a fitness value of run ith

e standard deviation is used to assess the stability ofthe algorithms If its value is low then the generated tests ofeach run do not have much difference in the fitness valuee weight constraint α is also being examined as it bal-ances the objective functions In our cases a change in αcan shift the importance towards the test duration con-straint the test difficulty constraint or the balance betweenthose two We can select α to suit what we require em-phasizing more on either the test duration or the testdifficulty

5.3. Experimental Results. The experiments are executed with the parameters in Table 3, chosen following Ridge and Kudenko [36], and the results are presented in Table 5 (run on the 4-CPU computer) and Table 6 (run on the 16-CPU computer). The comparisons of runtime and fitness for the small and large question banks are presented in Figures 4 and 5. Regarding Tables 5 and 6, each run extracts 100 tests simultaneously, and each test has a fitness value; each run also requires several iteration loops to successfully extract the 100 candidate tests. The average runtime for extracting tests is the mean over all 50 experimental runs, and the average number of iteration loops is the mean of the loops required across those 50 runs. The average fitness is the mean fitness of all 5,000 generated tests. The average duplicate indicates the average number of duplicate questions among the 100 generated tests of each run, averaged over all 50 runs; it also serves as an indicator of test diversity: the lower the value, the more diverse the tests.

When α is in the lower range [0.1, 0.5], the correctness of the difficulty value of each generated test is emphasized more than that of the total test time. Based on the average fitness values, all algorithms appear to have a harder time generating tests in the lower range [0.1, 0.5] than in the higher range [0.5, 0.9]. Additionally, as α increases, the runtime decreases, the fitness improves (i.e., the fitness values get smaller), and the number of loops required for generating tests decreases. Apparently, satisfying the test difficulty requirement is harder than satisfying the total test time requirement. The experimental results also show that integrating SA gives a better fitness value than running without SA; however, the runtimes of the algorithms with SA are longer, as a trade-off for the better fitness values.

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank's size is 100 and we need to generate 50 tests of 30 questions each in a single run, then some generated tests must contain questions that also appear in other generated tests.
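The example above can be checked with a simple pigeonhole count (a sketch of the counting argument, not the paper's duplicate metric): 50 tests of 30 questions fill 1,500 question slots, so a 100-question bank cannot supply them all distinctly.

```python
def min_shared_questions(bank_size, num_tests, questions_per_test):
    """Pigeonhole lower bound on cross-test question reuse.

    Returns how many question slots, at minimum, must be filled by a
    question already used in another test (0 means duplicates across
    tests are avoidable in principle).
    """
    slots = num_tests * questions_per_test
    return max(0, slots - bank_size)

# 50 tests x 30 questions from a bank of 100 questions.
overlap = min_shared_questions(bank_size=100, num_tests=50,
                               questions_per_test=30)
```
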

Based on the standard deviations in Tables 5 and 6, all MMPSO algorithms with SA are more stable than those without SA, since their standard deviation values are smaller. In other words, the differences in fitness values between runs are smaller with SA than without it. The smaller standard deviations and smaller average fitness values also mean that we are less likely to need to rerun the MMPSO with SA algorithms many times to obtain generated tests that better fit the test requirements: the tests obtained on the first run are likely close to the requirements (due to the low fitness value), and the chance of obtaining tests that fit the requirements poorly is low (due to the low standard deviation).

[Figure 5: Experimental results of the large question bank. Runtime (s) and fitness value are plotted against the weight constraint α (0.1 to 0.9) for Parallel MMPSO, Parallel MMPSO with SA, Migration parallel MMPSO, and Migration parallel MMPSO with SA (runtime and fitness value for each); the underlying data are in Table 5.]

6. Conclusions and Future Studies

Generation of question papers from a question bank is an important activity in extracting multiple-choice tests when the quality of the multiple-choice questions is good (i.e., the questions' difficulty levels are diverse and the question bank is large). In this paper, we proposed the use of MMPSO to solve the problem of generating k multiobjective tests in a single run. The objectives of the tests are the required level of difficulty and the required total test time. The experiments evaluated two algorithms: MMPSO with SA and normal MMPSO. The results indicate that MMPSO with SA gives better solutions than normal MMPSO based on various criteria, such as the diversity of solutions and the number of successful attempts.

Future studies may focus on investigating the use of the proposed hybrid approach [37, 38] to solve other NP-hard and combinatorial optimization problems, with a focus on fine-tuning the PSO parameters through some type of adaptive strategy. Additionally, we will extend our problem to provide feedback to instructors from multiple-choice data, for example, by using fuzzy theory [39] and PSO with SA for mining association rules to compute the difficulty levels of questions.

Data Availability

The data used in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] F. Ting, J. H. Wei, C. T. Kim, and Q. Tian, "Question classification for E-learning by artificial neural network," in Proceedings of the Fourth International Conference on Information, Communications and Signal Processing, pp. 1575–1761, Hefei, China, 2003.

[2] S. C. Cheng, Y. T. Lin, and Y. M. Huang, "Dynamic question generation system for web-based testing using particle swarm optimization," Expert Systems with Applications, vol. 36, no. 1, pp. 616–624, 2009.

[3] T. Bui, T. Nguyen, B. Vo, T. Nguyen, W. Pedrycz, and V. Snasel, "Application of particle swarm optimization to create multiple-choice tests," Journal of Information Science & Engineering, vol. 34, no. 6, pp. 1405–1423, 2018.

[4] M. Yildirim, "Heuristic optimization methods for generating test from a question bank," Advances in Artificial Intelligence, pp. 1218–1229, Springer, Berlin, Germany, 2007.

[5] M. Yildirim, "A genetic algorithm for generating test from a question bank," Computer Applications in Engineering Education, vol. 18, no. 2, pp. 298–305, 2010.

[6] B. R. Antonio and S. Chen, "Multi-swarm hybrid for multi-modal optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1759–1766, Brisbane, Australia, June 2012.

[7] N. Hou, F. He, Y. Zhou, Y. Chen, and X. Yan, "A parallel genetic algorithm with dispersion correction for HW/SW partitioning on multi-core CPU and many-core GPU," IEEE Access, vol. 6, pp. 883–898, 2018.

[8] M. E. C. Bento, D. Dotta, R. Kuiava, and R. A. Ramos, "A procedure to design fault-tolerant wide-area damping controllers," IEEE Access, vol. 6, pp. 23383–23405, 2018.

[9] C. Han, L. Wang, Z. Zhang, J. Xie, and Z. Xing, "A multi-objective genetic algorithm based on fitting and interpolation," IEEE Access, vol. 6, pp. 22920–22929, 2018.

[10] J. Lu, Z. Chen, Y. Yang, and L. V. Ming, "Online estimation of state of power for lithium-ion batteries in electric vehicles using genetic algorithm," IEEE Access, vol. 6, pp. 20868–20880, 2018.

[11] X.-Y. Liu, Y. Liang, S. Wang, Z.-Y. Yang, and H.-S. Ye, "A hybrid genetic algorithm with wrapper-embedded approaches for feature selection," IEEE Access, vol. 6, pp. 22863–22874, 2018.

[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[13] G. Sen, M. Krishnamoorthy, N. Rangaraj, and V. Narayanan, "Exact approaches for static data segment allocation problem in an information network," Computers & Operations Research, vol. 62, pp. 282–295, 2015.

[14] G. Sen and M. Krishnamoorthy, "Discrete particle swarm optimization algorithms for two variants of the static data segment location problem," Applied Intelligence, vol. 48, no. 3, pp. 771–790, 2018.

[15] M. Q. Peng, Y. J. Gong, J. J. Li, and Y. B. Lin, "Multi-swarm particle swarm optimization with multiple learning strategies," in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 15-16, New York, NY, USA, 2014.

[16] R. Vafashoar and M. R. Meybodi, "Multi swarm optimization algorithm with adaptive connectivity degree," Applied Intelligence, vol. 48, no. 4, pp. 909–941, 2017.

[17] X. Xia, L. Gui, and Z.-H. Zhan, "A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting," Applied Soft Computing, vol. 67, pp. 126–140, 2018.

[18] B. Nakisa, M. N. Rastgoo, M. F. Nasrudin, M. Zakree, and A. Nazri, "A multi-swarm particle swarm optimization with local search on multi-robot search system," Journal of Theoretical and Applied Information Technology, vol. 71, no. 1, pp. 129–136, 2015.

[19] M. Li and F. Babak, "Multi-objective particle swarm optimization with gradient descent search," International Journal of Swarm Intelligence and Evolutionary Computation, vol. 4, pp. 1–9, 2014.

[20] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization based on utopia point guided search," International Journal of Applied Metaheuristic Computing, vol. 9, no. 4, pp. 71–96, 2018.

[21] T. Cheng, M. F. P. J. Chen, Z. Yang, and S. Gan, "A novel hybrid teaching learning based multi-objective particle swarm optimization," Neurocomputing, vol. 222, pp. 11–25, 2016.

[22] H. A. Adel, L. Songfeng, E. A. A. Yahya, and A. G. A. Arkan, "A particle swarm optimization based on multi objective functions with uniform design," SSRG International Journal of Computer Science and Engineering, vol. 3, pp. 20–26, 2016.

[23] D. M. Alan, T. Gregorio, H. B. Z. Jose, and T. L. Edgar, "R2-based multi/many-objective particle swarm optimization," Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.

[24] S. P. Kapse and S. Krishnapillai, "An improved multi-objective particle swarm optimization algorithm based on adaptive local search," International Journal of Applied Evolutionary Computation, vol. 8, no. 2, pp. 1–29, 2017.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] Z. H. Zhan, J. Li, and J. Cao, "Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[27] N. ShiZhang and K. K. Mishra, "Improved multi-objective particle swarm optimization algorithm for optimizing watermark strength in color image watermarking," Applied Intelligence, vol. 47, no. 2, pp. 362–381, 2017.

[28] X. Liansong and P. Dazhi, "Multi-objective optimization based on chaotic particle swarm optimization," International Journal of Machine Learning and Computing, vol. 8, no. 3, pp. 229–235, 2018.

[29] Y. Xiang and Z. Xueqing, "Multi-swarm comprehensive learning particle swarm optimization for solving multi-objective optimization problems," PLoS One, vol. 12, no. 2, Article ID e, 2017.

[30] V. Mokarram and M. R. Banan, "A new PSO-based algorithm for multi-objective optimization with continuous and discrete design variables," Structural and Multidisciplinary Optimization, vol. 57, no. 2, pp. 509–533, 2017.

[31] Z. B. M. Z. Mohd, J. Kanesan, J. H. Chuah, S. Dhanapal, and G. Kendall, "A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization," Applied Soft Computing, vol. 70, pp. 680–700, 2018.

[32] R. Liu, J. Li, J. Fan, C. Mu, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.

[33] T. Nguyen, T. Bui, and B. Vo, "Multi-swarm single-objective particle swarm optimization to extract multiple-choice tests," Vietnam Journal of Computer Science, vol. 6, no. 2, pp. 147–161, 2019.

[34] D. Hunt, "W. A. Lewis on 'economic development with unlimited supplies of labour'," Economic Theories of Development: An Analysis of Competing Paradigms, pp. 87–95, Harvester Wheatsheaf, New York, NY, USA, 1989.

[35] A. Kharrat and M. Neji, "A hybrid feature selection for MRI brain tumor classification," Innovations in Bio-Inspired Computing and Applications, pp. 329–338, Springer, Berlin, Germany, 2018.

[36] E. Ridge and D. Kudenko, "Tuning an algorithm using design of experiments," Experimental Methods for the Analysis of Optimization Algorithms, pp. 265–286, Springer, Berlin, Germany, 2010.

[37] P. Riccardo, I. Umberto, L. Giampaolo, and L. Stefano, "Global/local hybridization of the multi-objective particle swarm optimization with derivative-free multi-objective local search," in Proceedings of the Congress of the Italian Society of Industrial and Applied Mathematics (SIMAI), pp. 198–209, Milan, Italy, 2017.

[38] S. Sedarous, S. M. El-Gokhy, and E. Sallam, "Multi-swarm multi-objective optimization based on a hybrid strategy," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1619–1629, 2018.

[39] H. S. Le and H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49, no. 1, pp. 172–187, 2019.
